🕵🏻‍♂️ Logic programming and solvers
Kamil Szczerba

Functional programming has become quite popular in the last decade. Most people’s attention is now on machine learning. What if I told you there was one last El Dorado, one last unexploited programming paradigm?

In this article, I will try to show the potential uses of logic programming and solvers:

  1. Prolog on the case
    How natural-language constraints can be translated into logic programming
  2. Prolog’s algorithm
    How the basic logic programming algorithm works
  3. Solvers
    How solvers can solve difficult problems
  4. Unsatisfiable cores
    How they can capture the intent of the user
  5. Other uses
    Further leads
Prolog on the case

We will imagine a situation, and see whether you can find the answer. No publicly available AI, at the time of publication, has been able to find the solution. Indeed, they excel at creative endeavours, but have trouble being logical and consistent. And that is where logic programming shines. Since large language models (LLMs) can translate natural-language statements into logic code, the two work synergistically.

For those who do not know Prolog, it looks like this:
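A minimal sketch of a Prolog rule, matching the description below (the names are placeholders):

```prolog
% The head is true if every condition in the body is true.
fact(Parameter1, Parameter2) :-
    condition1(Parameter1),
    condition2(Parameter2).
```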

It means: the fact parameterised by Parameter1 and Parameter2 is true if condition1(...) and condition2(...) are true. I will translate the story’s facts into Prolog. Variables are capitalised. Now that this is done, let us investigate.

You are enjoying a stay in a manor with some guests. Only your friend and you have a key to the study room.

You decide to join him. The door being locked, you use your key, and lock it again once inside the dark room. Only the centre, where your friend is sitting, is lit and visible. However, as you walk towards him, you realise that something is off: his key is lying at his feet, and your friend is holding a knife… stabbed into his chest. What has happened?
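These facts could be sketched in Prolog like this (the predicate and atom names are my own, not necessarily those of the original program):

```prolog
% Who holds a key to the study.
has_key(you).
has_key(friend).

% The study was locked, and you locked it again behind you.
locked(study).

% What you observe inside.
in_room(you, study).
in_room(friend, study).
key_at_feet_of(friend).
knife_in_chest_of(friend).
```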

Running this code will download 11 MB for Prolog.

He was alone in the room. A suicide? However, on closer inspection, you notice that the back wound is larger than the chest wound, whereas the tip of the knife is thinner. The fatal wound was in the back…?

We are now sure a third party killed your friend. What can we learn about him?

The killer is with you, in the room, and has certainly seen you.
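A self-contained sketch of how the deduction could be encoded (again, the rules and names are illustrative assumptions, not the article’s original code):

```prolog
% Known key holders; no one else can pass the locked door.
has_key(you).
has_key(friend).

% The back wound is wider than the knife tip: it was not self-inflicted.
fatal_wound(back).
third_party_killer :- fatal_wound(back).

% Leaving the locked study requires a key, and both keys are
% accounted for inside; so the killer could not have left.
can_leave(Person) :- has_key(Person).
killer_in_room :- third_party_killer, \+ can_leave(killer).

% ?- killer_in_room.
% true.
```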

This is the deduction no LLM could make. Let us see how Prolog does it.

Prolog’s algorithm

This section assumes you know basic Boolean algebra. You can skip it if you are more interested in the technological possibilities.

Here are the rules:

  1. Prolog adds facts (clauses in our equations) with a conjunction ($\land$).
  2. It adds the negation of our query, and checks whether the total is false. Indeed, when you have a list of facts you know to be true, just adding the query is not helpful. Whereas, if you add its negation and observe a contradiction, it means the original query is true.
  3. Clauses are unified with the resolution rule: $(A \lor B) \land (\lnot A \lor C) \rightarrow B \lor C$. Indeed, $A$ is either $true$ or $false$. When it is $true$, $C$ is $true$, since $(\lnot A \lor C)$ is $true$. When it is $false$, $B$ is $true$, since $(A \lor B)$ is $true$. Therefore, $B \lor C$ is $true$.

Here is an example:
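The example is the classic syllogism; in standard Prolog (where atoms are lowercase), it could read:

```prolog
man(socrates).

% Every man is mortal.
mortal(X) :- man(X).

% ?- mortal(Who).
% Who = socrates.
```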

The translation into clauses (joined by $\land$s) is:

  1. $man(Socrates)$
  2. $\forall x,\ man(x) \rightarrow mortal(x)$ (or, in basic operators, $\forall x,\ \lnot man(x) \lor mortal(x)$)
  3. $\lnot mortal(Mortal)$ ($\lnot$ is the negation of the query, the source of the contradiction)

Then, Prolog proceeds by pattern matching, starting with our query. Since we also have the same pattern ($mortal$) in (2), and it starts with $\forall$, we instantiate (2) with $x = Mortal$:

  4. $\lnot man(Mortal) \lor mortal(Mortal)$

By combining (3) and (4), we obtain:

  5. $\lnot man(Mortal)$

Finally, the last step, to obtain the empty clause, is to unify (5) with (1), with $Mortal = Socrates$. That is how Prolog works.

There are many extensions to Prolog’s paradigm, such as a probabilistic version.

While natural language can easily be translated by an LLM into Prolog, the limitation of Prolog’s algorithm is that it must be able to instantiate a solution. What about harder problems? We will need sharper tools.

Solvers

Let us suppose we are building a wall (using just bricks, without binder). The foreman shows you a brick and tells you:

As the bricks’ sides and the wall length are integers, I thought they would fit perfectly. I have tried to lay the bricks on all their sides, but every time, the last brick was 1 (cm/inch/arbitrary unit) too long! It would not have happened if the wall were shorter. Anyway, send the invoice for the wall’s length.

And then, he leaves. Suddenly, you realise: he did not tell you the length of the wall! It seems to be one of those impossible mathematical problems: “from the radius of the ring, deduce the date of the engagement”. The mathematician wizards among our readers may suggest the Chinese remainder theorem, but those of us who are mere mortals might turn to a solver; here, Z3.

Z3 uses a strange syntax: the operator always comes first, surrounded by parentheses. For example, instead of writing $a < b + c$, you would write (< a (+ b c)). Apart from this Lisp-y eccentricity, the language is closer to our day-to-day programming than Prolog.

Let us now study the brick he showed you. It has three different edges, and therefore, three lengths:

In our example, the lengths are 3, 5, and 7. Therefore, the remainders of the division (mod) of the wall length by the brick’s sides (3, 5, and 7) are 2, 4, and 6. Let us write all of this down.
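In SMT-LIB, a sketch could look like this (bounding the length to ask for the smallest solution is my assumption, justified by the foreman’s remark that a shorter wall would have worked):

```smt2
(declare-const length Int)
(assert (> length 0))
; One extra unit is always left over, whichever side the bricks lie on.
(assert (= (mod length 3) 2))
(assert (= (mod length 5) 4))
(assert (= (mod length 7) 6))
; Ask for the smallest such wall.
(assert (< length 105))
(check-sat)   ; sat
(get-model)   ; length = 104
```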

Running this will download 33 MB of compressed dependencies. sat means that there is a solution.

Success: the length is 104! You can verify it by yourself. Z3 does not (always) need to instantiate a model, and uses much more powerful heuristics than Prolog’s algorithm. The downside is that it can easily lose its way, and needs guidance. More generally, keep in mind that you cannot:

  • Prove recursive facts.
  • Generate what might be a function, in general. For example, generating an infinite array is analogous to a function, since it is a map from indices to values.
Unsatisfiable cores

Solvers are almost never used outside industrial contexts, but they can be useful even for mundane logic. Let us consider the current platform, Comment.

A deeper study of its principles can be found 🗺️here.

Let us consider the following tasks:

To do
To do

Imagine that the user has two lists open: one with the tasks to do (containing both), and another with the tasks done (currently empty). In functional UI frameworks like React, they are both a simple filter over the list of tasks.

Then, he drags both tasks to the list of tasks done. What should happen in the database?

  • Intuitively, it would change their status from ToDo to Done.
  • However, since the flow of data is unidirectional, we must leave the declarative functional paradigm and write an imperative handler: “when new tasks are dropped, find the conflicting tags (here, the status), and replace them with the tags of their new context (status = Done).”

Most apps are full of this duplicated logic: once in the view, and once in the handlers. Can we avoid it? Here is the example with Z3:
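A plausible SMT-LIB sketch of the database state (identifiers other than t1Status, t2Status, t1Name, and t2Name, and the task names themselves, are my own assumptions):

```smt2
(set-option :produce-unsat-cores true)
(declare-datatypes () ((Status ToDo Done)))
(declare-const task1Status Status)
(declare-const task2Status Status)
(declare-const task1Name String)
(declare-const task2Name String)
; What the database currently contains: one named assertion per field.
(assert (! (= task1Name "Task 1") :named t1Name))
(assert (! (= task2Name "Task 2") :named t2Name))
(assert (! (= task1Status ToDo) :named t1Status))
(assert (! (= task2Status ToDo) :named t2Status))
(check-sat)   ; sat
```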

The result is consistent with what we want to be displayed. However, since the user dropped those tasks in the tasks done, let us add these contradictory assertions:
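In a sketch of this step (identifiers are my own assumptions), the user’s drop can be expressed as unnamed assertions, since only named assertions appear in the unsat core:

```smt2
(set-option :produce-unsat-cores true)
(declare-datatypes () ((Status ToDo Done)))
(declare-const task1Status Status)
(declare-const task2Status Status)
; The database still says ToDo…
(assert (! (= task1Status ToDo) :named t1Status))
(assert (! (= task2Status ToDo) :named t2Status))
; …but the user just dropped both tasks into the "done" list.
(assert (= task1Status Done))
(assert (= task2Status Done))
(check-sat)        ; unsat
(get-unsat-core)   ; (t1Status t2Status)
```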

Z3 obviously finds that there is a contradiction: unsat. However, thanks to the option :produce-unsat-cores true, it also shows the assertions causing it: t1Status and t2Status. t1Name and t2Name can be kept.

Therefore, we can take the same code, remove t1Status and t2Status, and ask Z3 to infer the right values:
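A sketch of the final step (identifiers are my own assumptions):

```smt2
(declare-datatypes () ((Status ToDo Done)))
(declare-const task1Status Status)
(declare-const task2Status Status)
; The conflicting t1Status/t2Status assertions are gone;
; only the user's intent remains.
(assert (= task1Status Done))
(assert (= task2Status Done))
(check-sat)                            ; sat
(get-value (task1Status task2Status))  ; both Done
```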

Aha: now, the status of these tasks is Done! We can update them in our database/store. So, the steps were:

  1. Define the logic for separating the tasks among our lists
  2. Assert that the lists are actually what the user has done
  3. Update the conflicting records following Z3’s analysis

Therefore, we can avoid writing the handler by hand: Z3 will infer the intent of the user!

Other uses

Additionally, I believe logic programming and solvers can be used for collaboration:

  • Masterpieces are the fruit of unique minds because they need a coherent design. However, by specifying their vision through constraints, creators can co-author a work.
  • In programming, it unlocks a new kind of collaborative architecture, made of modules:

Each module specifies a list of intents, and the solver ensures they will be met. Therefore, a collaborator does not have to read the whole codebase. The solver can find the contradictions (when a feature is updated, for example), and the updater must acknowledge its replacement. Adding a module will be safer, since the solver will list the overruled behaviours. To find the relevant modules to update, LLMs and vector databases will be useful.

Why are they not used more today, then? Unfortunately, performance is prohibitive for many real-time uses.

Exploration of the intersection between logic and day-to-day programming ranges from simple applications of a solver on top of an existing language to trying to combine them in a single language that handles recursion.