Saturday, December 4, 2010

Proof in Programming

This post is based on some discussions that have been happening in my Next-Gen Programming Languages class. It follows up on the thoughts of an earlier post relating to AI and automation. This article on the military's use of robots is also relevant:

http://www.nytimes.com/2010/11/28/science/28robot.html?_r=1

The basic argument is that software systems are going to keep increasing in size and complexity, and languages and tools need to do something to keep up. I am a strong proponent of static type checking, in large part because it allows the compiler to prove that certain aspects of a program are correct. Without it you have to do more testing at every step, and because testing never achieves complete coverage, you can never be as certain based on tests as you can be with type checking.
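As a small illustration of the kind of guarantee I mean (a made-up Scala snippet, not from any real system), the bad call below is rejected by the compiler for every possible execution, while a test suite would only catch it if some test happened to exercise that call:

```scala
// The type checker proves that averageScore can only ever be applied
// to a List[Double], at every call site, without running anything.
def averageScore(scores: List[Double]): Double =
  scores.sum / scores.length

averageScore(List(90.5, 87.0, 99.5))      // compiles and runs fine
// averageScore(List("90.5", "87.0"))     // rejected at compile time,
//                                        // no test case required
```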

Type checking only provides correctness proofs for a certain fraction of the program. It seems to me that if we are really going to trust software to drive our cars and shoot our guns, we need more of those systems to be correct at the level of proof. You can prove an algorithm correct by hand. It is tedious and painful, and it certainly doesn't scale very far. However, proof can be fairly well automated in most places, and my question is how far that idea could be pushed and what the possible implications are.
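To make "proving an algorithm correct" a bit more concrete, here is a minimal sketch in Lean (assuming a recent Lean 4 with the built-in omega tactic): facts that would be tedious to verify by hand get discharged mechanically.

```lean
-- A fact one could prove by hand with an induction argument; the
-- standard library already carries a machine-checked proof of it.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Simple arithmetic obligations can be discharged automatically by
-- decision procedures such as `omega`.
example (n : Nat) : n + n = 2 * n := by omega
```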

We are seeing more and more type inference in languages. ML and its family have had it for many years. Now it is in languages like Scala, and even the next version of C++ (C++0x) has type inference in the mix. The idea here is that types are fairly algorithmic as well. The compiler can figure out the types of things most of the time. Humans just have to step in at certain points to help out.
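To give a concrete sense of that division of labor, here is a small, made-up Scala example: the compiler infers most of the types, and the programmer steps in only where the language requires it, such as the return type of a recursive function.

```scala
// The compiler infers the types on the right-hand sides.
val count = 42                          // inferred as Int
val names = List("Ada", "Grace")        // inferred as List[String]

// The human steps in where inference needs help: Scala cannot infer
// the return type of a recursive function, so it must be annotated.
def factorial(n: BigInt): BigInt =
  if (n <= 1) 1 else n * factorial(n - 1)
```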

What if a language/programming system had a similar concept for proofs of correctness? The system automatically discovers as much as it can about what a function or some other grouping of code does. Every so often a human might step in to help out if the automatic system doesn't "get it". However, the human's overrides wouldn't be allowed to be incorrect; they would just set constraints that let the proving agent figure more out. That is exactly what type inference systems do: the user can give a more constrained type for something to help out, but if they provide an incorrect type, they get an error.
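Type annotations in Scala show that behavior nicely (another made-up snippet): a more specific type than the compiler would have inferred is accepted and used, but an incorrect one is rejected rather than trusted.

```scala
// Supplying a more constrained type is allowed and helps the compiler;
// the Int literals are widened to Double to satisfy the annotation.
val temps: List[Double] = List(98, 99, 101)

// An incorrect annotation is a compile-time error, not a suggestion.
// val bad: List[Int] = List(98.6, 99.5)   // error: Double does not conform to Int
```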

This made me think of something I had learned about Modula-3. Modula-3 had safe and unsafe modules. The safe modules were garbage collected and had pointers that behaved like Java references; they were always safe, and the language and runtime took care of things. Of course, you can't do systems programming in Java, and the engineers at Digital wanted to allow systems programming in Modula-3. Unsafe modules allowed pointer arithmetic and let you do the unsafe things you can do in C/C++. The idea was that you only used them when needed and did most of your programming in safe modules.

So consider extending this a step further to "proved" modules. These modules would only contain code that has been proven to be "correct". In my head I picture something like the hover-overs you get in newer editors when type inference is involved. The automatic proof system would tell you everything that it can about the code. The programmer would set constraints until the desired result was attained. Then the parts of that result you really cared about could be "locked in" in some way. That way, if you or someone else changed the code later in such a way that a desired outcome wasn't provable, you would get an error.
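No such proof system exists in Scala, but as a rough, hypothetical sketch of the interaction, the built-in require and ensuring contracts give a feel for what "locking in" a property might look like; in the imagined system these would be verified statically rather than checked at runtime.

```scala
// A candidate for a "proved" module: the postcondition written with
// ensuring is the property we want locked in. Today Scala only checks
// it at runtime; the imagined proof system would discharge it statically
// and flag any later edit that made it unprovable.
def clamp(x: Int, low: Int, high: Int): Int = {
  require(low <= high, "low must not exceed high")      // constraint on callers
  math.max(low, math.min(high, x))
} ensuring (result => result >= low && result <= high)  // the locked-in outcome
```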

You would probably want the ability to move code around, but moving things down a step, from proven to safe or from safe to unsafe, might trigger all kinds of errors and warnings when dependent code was no longer safe or provable. The goal, though, is to make as much code as possible in a system proven correct, and what can't be proven you want to be safe unless it absolutely has to be unsafe. Moving things up a level wouldn't be as challenging; you would just have to clean the code up a bit to match the requirements of the higher level of safety, and code calling on it would generally go unchanged.

This leads to a very interesting theoretical question: how much can you prove? What fraction of the large systems that we are interested in could go into a proven module? I don't know if we have a good answer for this yet, but it would be interesting to find out. My gut tells me that the answer is a lot, and if that is true, this would make programming a much safer and more reliable process.

Now for something even more theoretical and esoteric. I have a conjecture that while human intelligence is computable, the things we consider artistic aren't. We consider them art in large part because we can't give an algorithmic description of how to do them, so they are more individual and subjective. My new conjecture along this line is that intelligent behavior, while computable, isn't provable. In order to get intelligent behavior, you would need to include significant unprovable aspects. If this conjecture is true, then limiting development as much as possible to provable modules could also be what is required to prevent scenarios like Terminator or the Matrix. Granted, there are all kinds of ifs in this paragraph, and it is nothing more than conjecture.
