#76
October 5th 05, 01:17 PM
Neil Gould

Recently, Peter Duniho posted:

"Bob Noel" wrote in message
...
What we don't have is the ability to formally prove the correctness
of software.


We DO have the ability to prove "correct enough". That is, we have
engineering strategies designed to ensure correctness to some given
degree. These are the same techniques that were used for the space
shuttle computers (though, unfortunately, not for recent unmanned
space probes), and similar techniques are used for existing
automation in aviation.

It's true that we don't have mathematical proofs for correctness. Of
course, it's widely believed we may never be able to have that. But
physical engineering suffers from similar limitations, and it seems
to get by just fine. Theoretical design can always be undermined by
human implementation, but there is an idea of "good enough" in both
types of engineering. You simply design in the assumption that human
implementation will sometimes fail.

I don't see this as a fundamental barrier to pilotless airliners.

In the same vein, piloted airliners are "good enough". The number of
catastrophic losses is quite small in comparison to the number of
flights. There is no evidence that aircraft piloted by computer would fare
any better, much less significantly better.

As I see it, the question isn't whether a computer can fly an airplane
from A to B, but whether it can handle unanticipated problems
successfully. This amounts to anticipating the opportunities to fail, and
the possibilities extend well beyond our ability to predict them (the
DARPA land XC example demonstrates that this may be an issue).
While computer-piloted aircraft may eventually be able to succeed "most of
the time", human-piloted aircraft have done so for quite some time. So, I
question the benefits of such an effort.
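The "design in assumptions of failure" strategy Peter mentions can be
sketched in miniature. The following is a hypothetical illustration (not
actual avionics code, and `tmr_vote` is an invented name): triple-modular
redundancy, where several independent channels compute the same value and a
voter masks a single faulty channel by taking the majority.

```python
# Hypothetical sketch of triple-modular redundancy (TMR) voting.
# Three independent channels produce a reading; the voter returns
# the majority value, masking one faulty channel.
from collections import Counter

def tmr_vote(readings):
    """Return the strict-majority value among redundant channel readings.

    Raises ValueError when no value has a strict majority, which a real
    system would have to treat as a detected but unmaskable fault.
    """
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise ValueError("no majority: unmaskable fault")
    return value

# One channel reports a wrong altitude; the voter masks the fault.
print(tmr_vote([30000, 30000, 29970]))  # -> 30000
```

Note that this masks random single-channel faults, not a flaw shared by all
channels, which is exactly the kind of unanticipated problem at issue here.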

Neil