Is it time to stop adding features to soaring software? Is it time to focus on reliability?


son_of_flubber
December 11th 12, 12:53 AM
We tend to focus on ease of use, function and support when we select a flavor of soaring software. But what about reliability and robustness? It seems entirely possible to be misled by erroneous output from a digital adviser. For example, the glider computer might erroneously advise that you have enough altitude to make an upwind transition of a ridge.

Many software programs will work flawlessly most of the time, only to fail at a "boundary condition": an unusual set of conditions exposes an underlying defect in the code.

1. Does anyone have any real-life cases of bad information being provided by a digital assistant in the air?

2. Assuming that a piece of software works most of the time, having more users running it for more hours under a greater variety of conditions increases the chance of finding a hidden defect. Once a defect manifests, it still has to be recognized and reported. How many people use each variety of gliding software? Would that correlate with robustness?

3. One of the drawbacks of adding features or fixing defects in a program is the law of unintended consequences: the new feature or bug fix might have the side effect of introducing new hidden defects. Every time a new version of software is released, the confidence level in that software should be reset, or at least lowered. We often assume that a new version will be more reliable and that everything that used to work will still work. That is usually true when software goes from alpha to beta, but mature software can suddenly be broken by a new release. Soaring software is edging into the zone of precarious maturity.

Now that a number of soaring programs have implemented a rather broad menu of features, I would love for a development team to stop introducing new features and instead focus on increasing the reliability and robustness of the software, and thereby increase the justifiable confidence in the software.

An open source software project like XCSoar is in a good position to do this, because the developers are only paid in kudos, glory, and self-satisfaction. (There is no revenue stream to maintain.) There are proven techniques for finding hidden defects, for example: 1) code inspection, 2) functional testing, 3) exhaustive model-based automated testing.

Adding new features is sexy and fun, while inspecting code and testing is the opposite. It's too bad really, because one day a hidden software defect is going to lead to a fatal pilot error.
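To make the ridge-transition example concrete, here is a rough sketch of the kind of wind-adjusted arrival-height arithmetic a glide computer might do. The glide ratio, speeds and wind values are invented for illustration; this is not the algorithm of XCSoar, LK8000, SeeYou or any other program discussed here.

# Rough sketch of a wind-adjusted arrival-height estimate (illustrative only;
# the glide ratio, speeds and wind values below are invented).

def arrival_height(start_alt_m, dist_km, glide_ratio, cruise_kmh, headwind_kmh):
    """Height remaining after gliding dist_km into a headwind."""
    sink_kmh = cruise_kmh / glide_ratio        # still-air sink rate at cruise speed
    ground_speed = cruise_kmh - headwind_kmh   # speed made good over the ground
    if ground_speed <= 0:
        return float("-inf")                   # cannot penetrate the wind at all
    return start_alt_m - dist_km * 1000.0 * sink_kmh / ground_speed

# A 20 km transition at 100 km/h in a 40:1 glider, starting at 1500 m:
print(arrival_height(1500, 20, 40, 100, 10))   # about 944 m left with a 10 km/h headwind
print(arrival_height(1500, 20, 40, 100, 30))   # about 786 m left if the wind is really 30 km/h

A wind estimate that is off by 20 km/h moves the predicted arrival by roughly 160 m in this toy example, which over a ridge is exactly the kind of erroneous "you will make it" advice described above.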

December 11th 12, 01:12 AM
On Monday, December 10, 2012 4:53:16 PM UTC-8, son_of_flubber wrote:
> We tend to focus on ease of use, function and support when we select a flavor of soaring software. But what about reliability and robustness? [...]

I'm still using XCSoar 5.2.4, because it has everything I NEED, and runs with Stone-Axe reliability on my iPAQ 3950, which is STILL the best display in sunlight I've seen yet.

AGL
December 11th 12, 02:05 AM
> I'm still using XCSoar 5.2.4, because it has everything I NEED, and runs with Stone-Axe reliability on my iPAQ 3950, which is STILL the best display in sunlight I've seen yet.

I'm still using SoarPilot, usually with a Palm Tungsten "T" or sometimes with a Windows 5 device with a Palm emulator. It does everything I need. There is an active Yahoo Group of users that is becoming quieter all the time because no one seems to be able to come up with more improvement requests. Frank Paynter used this at one time, and I'm not sure what he uses now or what prompted a change. A list of what SoarPilot doesn't do would make for interesting reading.

The hardware is getting old, but is still very readable in sunlight. Nevertheless, I have installed XCSOAR and LK8000 on a PDA to play with in the car this winter. Here's the problem with that: It's going to take me a long time to become as thoroughly familiar with those, and even longer to trust them. I fear errors of use more than software errors.

Fortunately we seem to have passed the point where we think that free means inferior, but we still think that pretty means better.

So, until something substantial happens with screens, I'm sticking with what I've got.

Roel Baardman
December 11th 12, 08:15 AM
>An open source software project like XCSoar is in a good position to do this, because the developers are only paid in kudos, glory, and self-satisfaction. (There is no revenue stream to maintain.)

I think self-satisfaction is the primary stimulus for most open-source developers. This would also explain why they keep adding features: only when the developer himself encounters a serious defect will he be truly motivated to fix it.

>There are proven techniques for finding hidden defects, for example
Some of these practices are performed by the XCSoar team as far as I know (from hanging out on their IRC channel).

>1)Code inspection
This is done extensively. I have seen _a lot_ of discussion regarding code quality, performance, architecture, etc.

>2)Functional testing
XCSoar does have unit tests, if that's what you mean.

>3)Exhaustive model-based automated testing.
As far as I know this is still mostly academic. When I graduated two years ago, a lot of research was still being done in this area. The tools I worked with mostly did code generation, using a (verified) model as the source. When you already have an extensive codebase (like XCSoar), reverse-engineering that model is hard, I think.

December 11th 12, 10:42 AM
> An open source software project like XCSoar is in a good position
> to do this, because the developers are only paid in kudos, glory,
> and self-satisfaction. (There is no revenue stream to maintain).

I have great respect for people who do what we do in their free time. But I would dare to say that what you say above works exactly the other way around.

We're all using our free time in a way which makes sense and is fun. Finding bugs, correcting them and even rewriting code just because once in the past we took some shortcuts and now we're seeing the unwanted effects is not fun. Exactly because at Naviter we are dependent on the revenue stream, we are much more motivated to re-think and re-do what we have done not-so-well in the past. Sometimes it comes at the expense of not adding as many new features as we (but not you ;) would have liked, but it pays off in the reliability of the output.

There is another price that you pay for adding more features. The number of available settings and options becomes overwhelming. It often reduces the usability of the software for the average pilot even if it does raise it for the savvy ones.

We've been through that before and it's fun to see us go through this again. It's amazing how much time you can spend on keeping only the absolutely most necessary and useful settings, to keep the software exactly as usable as before but simple to operate. When you add new features and you're not sure exactly what parameters will work well, it's easy to just add settings for those parameters. Great for the savvy, but very easy to misinterpret for the average pilot.

You can follow the progress of reducing the number of settings if you install SeeYou on your Android phone/tablet (and soon also on an iPhone/iPad). We're trying really hard to have the most minimal set of settings in order to keep the usability right at the top.

It's a time sinkhole though!

Cheers,
Andrej Kolar
--
glider pilots use
http://www.Naviter.com


December 11th 12, 11:44 AM
Don't be too hard on these developers. They enjoy building the code. They provide us the fruits of their labor "free of charge". To them the code is the challenge. LK8000 has reduced the CPU usage of my PNA from 20% down to 6% over the last few betas. The airspace analysis has become a "thing of beauty".

I just spent two hours studying some configuration options. I can see where some may not want to study a manual to better learn their software. In that case just don't upgrade to new versions. As the previous poster said, keep your old version that you have configured to your comfort.

I am amazed at what the shareware developers are doing for us. Keep up the good work, guys!

Lane
XF

Max Kellermann[_2_]
December 11th 12, 12:27 PM
On Tuesday, December 11, 2012 1:53:16 AM UTC+1, son_of_flubber wrote:
> An open source software project like XCSoar is in a good position to do this, because the developers are only paid in kudos, glory, and self-satisfaction. (There is no revenue stream to maintain). There are proven techniques for finding hidden defects, for example 1)Code inspection 2)Functional testing 3)Exhaustive model-based automated testing.

As Roel already said, we have many unit tests already, but not enough; there is never enough. Since the first day I joined the XCSoar project as a developer, I have worked on separating out code to run in isolated unit tests. (Which, by the way, was the very reason LK8000 was created: the LK8000 developer thought this was a bad idea, and so he left us. LK8000 still doesn't have a single unit test.)

You are welcome to inspect the XCSoar source code and write more unit tests. We would very much like to see more manpower put into it. Not because we think that XCSoar is in bad shape; it's pretty stable and the code has become quite good over the years. But with more manpower, we could do so much more.

Don't talk about how others should or could do something, just do it yourself. Join our IRC channel and talk to us: http://www.xcsoar.org/discover/irc.html (#xcsoar on irc.freenode.net)
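For readers who have not met the term: a unit test is a small program that exercises one isolated calculation and checks the result, including at the boundary conditions mentioned at the top of the thread. Here is a minimal, hypothetical sketch in Python, purely illustrative and not code from the XCSoar test suite (XCSoar itself is written in C++):

# Hypothetical unit test for the toy arrival-height calculation sketched
# earlier in the thread; illustrative only, not from any real test suite.
import unittest

def arrival_height(start_alt_m, dist_km, glide_ratio, cruise_kmh, headwind_kmh):
    sink_kmh = cruise_kmh / glide_ratio
    ground_speed = cruise_kmh - headwind_kmh
    if ground_speed <= 0:
        return float("-inf")
    return start_alt_m - dist_km * 1000.0 * sink_kmh / ground_speed

class ArrivalHeightTest(unittest.TestCase):
    def test_still_air(self):
        # a 40:1 glide over 20 km must cost exactly 500 m in still air
        self.assertAlmostEqual(arrival_height(1500, 20, 40, 100, 0), 1000.0)

    def test_boundary_headwind_equals_cruise_speed(self):
        # boundary condition: headwind equal to cruise speed, no progress possible
        self.assertEqual(arrival_height(1500, 20, 40, 100, 100), float("-inf"))

    def test_tailwind_never_hurts(self):
        # a tailwind (negative headwind here) must never give a worse arrival
        self.assertGreater(arrival_height(1500, 20, 40, 100, -20),
                           arrival_height(1500, 20, 40, 100, 0))

if __name__ == "__main__":
    unittest.main()

The point is not that three checks prove the function correct, only that they run automatically on every change and catch a whole class of regressions before a release ever reaches a beta tester.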

pcool
December 11th 12, 01:01 PM
We have 500 unit tests, Max. FIVE HUNDRED.
We have precise alpha and beta phases, scheduling a software delivery after 10-12 months, only when 500 people are quite confident everything is OK.
Is it enough to have 500 people doing this work?
I ask because personally I don't trust unit tests, for a simple reason: they are made to report only whether a desired result is obtained or not. And the real truth is that for most functions you cannot check all "desired results", and thus you are not accomplishing any real automated test.

The reason LK was created is that I was going to spend more time discussing things with you than doing them on my own.

I am not going to comment on the ridiculous statements by software manufacturers that don't have innovations in their products, and call this lack of innovation a "desire for simplicity" (De vulpe et uva - the fox and the grapes).



paolo





"Max Kellermann" wrote in message
...

On Tuesday, December 11, 2012 1:53:16 AM UTC+1, son_of_flubber wrote:
> An open source software project like XCSoar is in a good position to do
> this, because the developers are only paid in kudos, glory, and
> self-satisfaction. (There is no revenue stream to maintain). There are
> proven techniques for finding hidden defects, for example 1)Code
> inspection 2)Functional testing 3)Exhaustive model-based automated
> testing.

As Roel already said, we have many unit tests already, but not enough, there
is never enough. Since the first day I joined the XCSoar project as a
developer, I have worked on separating out code to run in isolated unit
tests. (Which, by the way, was the very reason the LK8000 was created:
because the LK8000 developer thought this was a bad idea, and so he left
us - LK8000 still doesn't have a single unit test.)

You are welcome to inspect the XCSoar source code and write more unit tests.
We would very much like to see more manpower put into it. Not because we
think that XCSoar is in a bad shape; it's pretty stable and the code has
become quite good over the years. But with more manpower, we could do so
much more.

Don't talk about how others should or could do something, just do it
yourself. Join our IRC channel and talk to us:
http://www.xcsoar.org/discover/irc.html (#xcsoar on irc.freenode.net)

Tobias Bieniek
December 11th 12, 01:44 PM
> We're all using our free time in a way which makes sense and is fun. Finding bugs, correcting them and even rewriting code just because once in the past we took some shortcuts and now we're seeing the unwanted effects is not fun.

Well... actually... I've been doing exactly that for three years on the XCSoar project now and let me tell you that this can be fun too. For me it was a learning experience that ultimately got me my current job and a few other things before that.

And @Paolo: why do you have unit tests if you don't even trust them?

Andy[_1_]
December 11th 12, 01:55 PM
On Dec 10, 5:53 pm, son_of_flubber wrote:

> 1.Does anyone have any true life cases of bad information being provided by a digital assistant in the air?

Several, but the most glaring one is GlideNav's use of wind data. On an out-and-return flight GNII will use the ground speed achieved on the first leg to predict the performance and arrival time on the second leg. Works fine with a light wind, but with a strong tail wind on the first leg it will sucker you into going far too deep into a turn area and maybe landing out.

I discussed this with the developer and found it isn't a bug but a deliberate design choice. I have wondered if this design was carried over to ClearNav.
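Rough numbers show how large that error can be. The figures below are invented for illustration, and this is not GlideNav's or ClearNav's actual code; it is just the ground-speed arithmetic described above:

# Why the first-leg ground speed is a poor predictor on an out-and-return.
# Invented figures; not GlideNav's or ClearNav's actual algorithm.

cruise_kmh = 100.0   # assumed speed through the air on both legs
wind_kmh = 30.0      # tailwind outbound, headwind on the way home
leg_km = 50.0

outbound_gs = cruise_kmh + wind_kmh   # 130 km/h over the ground
return_gs = cruise_kmh - wind_kmh     #  70 km/h over the ground

predicted_return_min = leg_km / outbound_gs * 60   # if the first-leg speed held
actual_return_min = leg_km / return_gs * 60        # at the real return ground speed

print(f"predicted return leg: {predicted_return_min:.0f} min")   # ~23 min
print(f"actual return leg:    {actual_return_min:.0f} min")      # ~43 min

Nearly twice the time (and so nearly twice the height) is needed to get home compared with what the first leg suggests, which is exactly how a strong tailwind out can sucker you too deep into a turn area.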

Andy (GY)

Wallace Berry[_2_]
December 11th 12, 03:37 PM
AGL wrote:

> I'm still using SoarPilot, usually with a Palm Tungsten "T" or sometimes with a Windows 5 device with a Palm emulator. It does everything I need. [...]

Same here! I still find SoarPilot on an old Tungsten T to be the best system for me. Simple, easy to configure, and much better sunlight readability than anything else I have tried. In over 8 years of flying with SoarPilot, I have had to do a reset in flight just once, and that was because of a damaged connector. I tried LK8000 on a Mio Moov. Even with the screen brightness hack I could not see it well enough to be usable.

I am very interested in efforts with the eInk Nook and hope they are
successful. My old Tungstens won't last forever...


Richard Brisbourne[_2_]
December 11th 12, 03:59 PM
At 02:05 11 December 2012, AGL wrote:
> So, until something substantial happens with screens, I'm sticking with what I've got.

Having swapped my HP314 for a Vertica V1 a few months ago, I reckon something substantial has happened with screens. I can now manage LK8000 (data sourced from Flarm) comfortably in bright sunlight wearing sunglasses for the first time. If you can get a look at someone's Vertica, Glider Guider or Oudie 2, you'll see what I mean.

Rather than put it in the car, I'd strongly recommend playing with LK8000 at home, either in sim mode or on Condor if you have it, before using it anywhere you need to give attention to something else. And being ruthless about what features to disable.

pcool
December 11th 12, 04:18 PM
We made unit tests to check complicated stuff like OLC realtime calculations, FAI triangle calculations and such, in the development phase. But generally I called "unit tests" the people doing individual checking of each beta version, and experience showed that you need at least 300 of them for 3 months to be relatively sure everything is OK. This is why I have brought the beta phase to almost 12 months.

One way or another, you still need beta testing, because obvious problems are easy to fix, while the nasty stuff is always obfuscated and by Murphy's law will pass all unit tests, because the tests did not consider the problem (otherwise, you would have fixed it already).

Best would be to have both, of course. XCSoar and LK can have hundreds of beta testers, and dozens of eyes checking the code and spotting problems. But in the end, the people doing debugging are just a few around the world, for both projects. You can count the people doing this work on XCSoar and LK8000 on the fingers of one hand.




"Tobias Bieniek" wrote in message
...

> We're all using our free time in a way which makes sense and fun. Finding
> bugs, correcting them and even rewriting code just because once in the
> past we took some shortcuts and now we're seeing the unwanted effects is
> not fun.

Well... actually... I've been doing exactly that for three years on the
XCSoar project now and let me tell you that this can be fun too. For me it
was a learning experience that ultimately got me my current job and a few
other things before that.

and @Paolo: why do you have unit tests if you don't even trust them?

Max Kellermann[_2_]
December 11th 12, 05:01 PM
On Tuesday, December 11, 2012 5:18:29 PM UTC+1, pcool wrote:
> We made unit tests to check complicated stuff like OLC realtime
> calculations, FAI triangle calculations and such, in the development phase.

Your use of the plural "tests" implies that there is more than one. However, that's an exaggeration: there's only one program (TestContest), and it's not even a unit test.

> But generally I called "unit tests" the people doing individual checking of
> each beta version, and experience showed that you need at least 300 of
> them for 3 months to be relatively sure everything is OK. This is why I have
> brought the beta phase to almost 12 months.

People are not unit tests. I think you misunderstand the meaning of the term "unit test", which is what this thread is about. You dismiss them as "useless" even though you know too little about them.

> One way or another, you still need beta testing, because obvious problems are
> easy to fix, while the nasty stuff is always obfuscated and by Murphy's
> law will pass all unit tests, because the tests did not consider the problem
> (otherwise, you would have fixed it already).

Not quite. We XCSoar developers fix a lot of bugs that are found by unit tests. By the time new code gets published, these bugs are fixed already. Unit tests help a lot during development, and save a lot of time.

Just look at how many bugs you had to fix last week that would not have happened with unit tests.

> You can count the people doing this work on XCSoar and LK8000
> on the fingers of one hand.

Hm. 4,638 pilots have installed XCSoar 6.5 preview releases on Android alone (number of unique Google accounts, no duplicates). The stable 6.4 version has been installed on Android by 22,005 pilots. Not counting all those people on Linux, Windows, WinCE, Mac OS X. Our bug tracker has 415 user accounts and 2,400 bug reports in the past 3 years. Lots of eyes, lots of bugs & bug fixes!

What makes me wonder is why you rejected the bug fixes I sent you today: https://github.com/LK8000/LK8000/pull/307

pcool
December 11th 12, 05:43 PM
You won't find on GitHub the test procedures for Contest and FAI, the last I remember. The contest test you mention is not the one we used. In either case, I did not make them.
However, this is not the point. I agree that having internal tests is better than not having them! Of course. Our 289 internal checks made with assertions can help, and did help, but cannot be compared to your unit tests.
Honestly I cannot judge your code because I don't know it at all, but I am sure it is well thought out in this part as well.

You know the reason why I don't merge your code already, and it is not worth discussing here, for a simple reason which I think you agree on. Some software manufacturers are just upset, to put it mildly, by the fact that free software is now at a quality level that makes it a real alternative to commercial products. Having one free program is already a pain; having two is simply killing someone's business. It looks pretty funny to them, and not only to them, to read your and my arguments about how good or how bad one program is.



"Max Kellermann" wrote in message
...

On Tuesday, December 11, 2012 5:18:29 PM UTC+1, pcool wrote:
> We made unit tests to check complicated stuff like OLC realtime
> calculations, FAI triangle calculations and such, in the development
> phase.

Your use of the plural "tests" implies that there is more than one. However,
that's an exaggeration, there's only one program (TestContest), and it's not
even a unit test.

> But generally I called "unit tests" the people doing individual checking
> of
> each beta versions, and the experience shew that you need at least 300 of
> them for 3 months to be relatively sure everything is ok. This is why I
> have
> brought the beta phase to almost 12 months.

People are not unit tests. I think you misunderstand the meaning of the word
"unit test", which is what this thread is about. You dismiss them as
"useless" which you know too little about.

> One way or another, you still need beta testing because obvious problems
> are
> easy to fix, while the nasty stuff is always obfuscated and for Murphy's
> laws will pass all unit tests, because tests did not consider the problem
> (otherwise, you would have fixed it already).

Not quite. We XCSoar developers fix a lot of bugs that are found by unit
tests. By the time new code gets published, these bugs are fixed already.
Unit tests help a lot during development, and save a lot of time.

Just look how many bugs you had to fix last week, that would not have
happened with unit tests.

> You can count people doing this work on xcsoar and lk8000
> with fingers of one hand.

Hm. 4,638 pilots have installed XCSoar 6.5 preview releases on Android alone
(number of unique Google accounts, no duplicates). The stable 6.4 version
has been installed on Android by 22,005 pilots. Not counting all those
people on Linux, Windows, WinCE, Mac OS X. Our bug tracker has 415 user
accounts and 2,400 bug reports in the past 3 years. Lots of eyes, lots of
bugs & bug fixes!

What makes me wonder is why you rejected the bug fixes I sent you today:
https://github.com/LK8000/LK8000/pull/307

Martin Gregorie[_5_]
December 11th 12, 11:04 PM
On Tue, 11 Dec 2012 09:37:35 -0600, Wallace Berry wrote:

> I tried LK8000 on a Mio Moov. Even with the screen brightness hack I could
> not see it well enough to be usable.
>
I have similar problems, but I've found that turning terrain off and setting the background map colour to white helps a lot. Then you find that the LK8000 overlay numbers are hard to read because they're white with black outlines. So, set the overlay text colour to white and check the 'inverse colours' box, and now you have solid black letters on a mostly white map (or you could just use something like dark blue for the text).

Of course, I mainly fly in flat parts of the UK, so if you fly where most of the land is standing on end and terrain shading is vital, this may not be a great solution.


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |

Wallace Berry[_2_]
December 12th 12, 07:53 PM
Martin Gregorie wrote:

> I have similar problems, but I've found that turning terrain off and
> setting the background map colour to white helps a lot. [...]

Thanks, Martin. I'll give that a try. Other than the readability issue, a PNA with LK8000 seems like a very nice self-contained system.
