OT Win7 updates

Even my Windows boxes have enough Cygwin tools it's sometimes hard to tell the difference. Except for the damn backslash. That's right up there on my shit list.

Reply to
rbowman

We're already getting requests for the Senior Citizen GUI style. Dispatchers aren't getting any younger either.

Reply to
rbowman

I open an X server and talk to a UN*X box -- there's always at least one running. I've got "shortcuts" set up to open telnet, ssh, ftp, etc. sessions on each of my hosts for each target. I use cartoon character names for host names so I just use little images of those characters as the icons for the machines. Too many things are hard to do in Windows.

Then there's the Slowaris boxes that manage to make things even more "interesting".

I've had to resort to all sorts of little tricks to give me clues as to what I'm talking to -- a Slowaris console can look an awful lot like a NetBSD console or a DOS box.

And, the virtual desktops in Windows need to be differentiated from the virtual desktops under CDE or NetBSD's native X.

It's easy to get confused and start typing on the wrong keyboard into the wrong window to the wrong host. Then, wondering why you're not seeing the results you expected! :-/

Reply to
Don Y

Good luck! Hearing loss, poor visual acuity (if not outright blindness), more easily confused, essential/Parkinsonian tremor, etc. At times, I think it would be easier to support all the languages of the world!

Reply to
Don Y

I have lots of language references -- SNOBOL, MUMPS, LISP (et al.), Limbo, PL/1, Modula2, Ada, C, C++, tcl, FORTRAN, Icon, etc. I spent a fair bit of time surveying them when I designed the scripting language for this project. If users can be blind, mobility impaired, etc. then surely the scripting language should address those same issues! Look at your code and imagine how a blind person would "view" it (spoken or braille). Or, how many keys a physically disabled person would have to hunt-and-peck to create/edit it. *Then*, think about how they would "do stuff" with it...

I originally tried to wrap my objects in C++ wrappers. But, found it only made things harder to understand.

E.g., a "handle" for an object isn't a pointer. There's no way to dereference it in any language. Nor is it possible to determine if two object handles reference the same "physical" object (a handle also includes access capabilities so you might be able to perform operations A and B on an object through *this* handle; but C and D on the same object through THAT handle).

Where I really miss the C++ notational syntax ("sugar") is for mathematical operations with nonstandard data types. E.g., in my scripting language, everything is a BigRational. It would be *much* nicer to be able to say: BigRational Amps, Volts, Resistance; ... Amps = Volts / Resistance; than to have to introduce function calls for each operator.

I like long, descriptive names. E.g., divisor, dividend, quotient. "a" and "b" just don't cut it. Even iterators get names instead of generic i, j, k. I think row++ means more than i++ or even r++. You only have to type the identifiers when you're writing the code; they don't influence performance.

I want to be able to subvocalize as I'm reading the code; so, "row++" is "next row" whereas "i++" would be...?

This is important when "reading" code to a blind user -- how would you pronounce "rw++"?

MULTICS was written in PL/1 and was a reasonably robust product. When you think about what technology (hardware and software) was like 40+ years ago, it's a wonder ANYTHING worked! Far more "art" than "science".

I looked at Ada as an option to enhance robustness. But, came away with the impression that "discipline" is just as good. Ada just tries to ENFORCE discipline.

I don't like anything "proprietary". I want to be able to move to a different processor in a heartbeat -- and not have to worry that my code is not portable.

This even extends to compiler pragmas, data types, etc. "uint16_t" not "short". I routinely test how well I'm doing by running my code through a variety of compilers (gcc, Borland's and Sun's). I have test suites so I can look at the results, wonder why something isn't as I'd expect, and then tweak the code accordingly.

Reply to
Don Y

I still use i, although the array or whatever I'm iterating gets a descriptive name. We had one programmer who thought anything more than ary was too much typing. There can be too much of a good thing though:

gtk_toggle_button_set_state()

The deeper you get into gtk the longer the function names get. And they all start with gtk_. It was an admirable effort not to pollute the namespace just in case someone else had a toggle_button_set_state() but even tab completion gets tedious.

Reply to
rbowman

One of the QA people has hearing aids that he sometimes turns off. One of our 'features' is the ability to play various wave files for alerts. The feature is almost universally loathed and turned off but he's perfectly happy with the whoops and whistles going off.

That's not the only 'feature' that's been requested by our users where I'm thinking it's really going to be annoying as I implement it. Most of the time it's a trivial project and it's easier to give them what they asked for and let them figure out it wasn't a good idea.

Reply to
rbowman

Presumably, he's listening to this through a SPEAKER and not HEADPHONES? I.e., so the solution to his "problem" becomes everyone else's problem!

My user sound system supports three "information channels":

- background (akin to something you hear when you're not focused on something more important; sort of like background music, background surroundings, etc.)

- foreground (gee, clever name, eh? this is something that you are focused on; the primary channel that has your attention, overriding the background in terms of importance -- but playing alongside/atop it)

- alerts/interrupts (yet another clever name? these are distractions or interruptions that are competing for your attention with the foreground; terse "events" "off to the side", so to speak)

As users can have different hearing acuity in each ear, the user (headphones) can adjust the "balance" of the background channel. But, it's basically a single, monaural "information source".

Similarly, the balance of the foreground channel can be adjusted to "position" it in the "center" (whatever that means for this user) of their aural field -- even if that is distinctly OFF center.

The alerts are characterized by their individual sounds (bell, buzzer, chime, etc. -- akin to your wav files). But, they can also be positioned in the aural "space". So, a chime can sound "up to the left" that indicates an incoming phone call. Another buzzer might sound "off to the right" to remind you that you have something "scheduled" for this time. A mallard's quack might sound straight ahead to alert you to someone's presence at the front door. etc.

Each "interrupt" is essentially asking you, "do you want to shift your attention away from whatever the foreground channel might be saying to you, in order to address the event that 'it' is trying to alert you to?" Do you want to "shift your focus" to some other competing information channel?

Note that you might not be able to "pause" your foreground "dialog" (imagine you're talking to someone on that channel). And, you don't want the interruption to "persist" as it would require conscious action to silence it -- while you are presumably FOCUSED on something else! The alerts want to be almost percussive in nature -- so they slip in between words, conceptually.

But, when they are that brief and UNEXPECTED, it might be difficult to remember what you heard -- was that a beep or a bop? Hence the value of being able to place them spatially; you can remember where they came from and use that as a clue to their likely intent (or, even categorize them: things from the left are important to me; things from the right, not so much).

The toughest part of all this is figuring out how to let the user define these "alerts" in a manner that makes sense to *him*. You don't want to force every user to memorize a dozen different possible alerts. Nor do you want to force "adept" users to have to query the device to clarify what some "generic" alert actually meant. The more you can "customize", the harder the system becomes to use for the typical user (hence the need to design smart defaults). In the case of the alerts, the user has to assign a particular sound *and* a particular location in space -- in such a way that he can later associate meaning with that combination.
Reply to
Don Y

It depends on what that array/structure references.

E.g., if I have a bag of items, then it might be "count". Or, if it is a list of people, it might be "person". etc.

Yes. One other "loss" by not using C++ was the support for namespaces. I had asked "others" for details about which C++ features dragged in "unseen overhead" in the hope of being able to use just those features without the "invisible other cruft" (default constructors, RTTI, exceptions, etc.). But, noticed folks (who were normally quick to offer "opinions") were awfully silent or tentative in their answers. I figured something that fundamental SHOULD have a simple, obvious answer and the fact that it couldn't produce one (that would apply regardless of compiler choice) was an "early warning" that surprises lie ahead!

Reply to
Don Y

Yep. That is the problem behind the concept. Typically the dispatchers have a hands-free headset and aren't plugged into the computer. There are visual alerts too but presumably a color blind person might miss a line turning red. Aural cues are fine if you can isolate them to an individual. The same would go for speech to text interfaces like Dragon or the new gadgets. The last thing you need in an open office type environment is someone chatting with Siri or Cortana.

Reply to
rbowman

So, no one *should* hear the chirps, then? Or, are the headsets acoustically transparent? (Or, are the old coots simply DEAF and have the volume turned up so high that you can hear "through" them regardless??)

Add a second channel to communicate the "redness" (whatever red signifies). E.g., blinking, inverting video, etc. About 7% of men are color blind. (and most of the rest of us only had 8 crayons in our Crayola box! :> )

In my case, I use a BT earpiece -- it gives me output and input capability. It also lets me find where the user is located in the environment without burdening him/her with lots of OTHER equipment; the earpiece is "required" to communicate so users understand the need for it and can rationalize wearing it moreso than a device that tells the system where they are located.

[The problem is that earpieces are just one ear. I've been looking for a similarly lightweight stereo headset (with microphone) but haven't found one that I like -- yet.]

Yes. I support a gestural interface alongside speech in my "non-visual" user interface -- as there are times when talking aloud would be disturbing to others (e.g., in a corporate boardroom, in a class, etc.) AND times when you don't want others to know what you are asking the system to do for you! (e.g., what's Mr. Davis' first name? I really would not like to have to ASK him to remind me of it while I'm chatting with him! That would be pretty embarrassing! OTOH, if I can figure it out *before* we part ways, I can offer it up in my farewell wishes: "Nice talking to you, *BOB*!" And, I can add a reference to his wife/kids: "Say 'hello' to Liz for me!")
Reply to
Don Y

The one thing I like which eventually made it into the language via the STL are containers. Those were offset by the whole iostream fiasco which I consider to be complex, confusing, and inefficient.

My problem is I can see through the syntactic sugar. try/catch is clean and elegant but if there isn't one of Dijkstra's dreaded gotos lurking in there I'll eat it. That's after you punch through a few vtables to figure out where you're going to.

Lambdas also seemed much ado about nothing but it's getting so no language has arrived unless it has lambdas.

Reply to
rbowman

Containers were never a problem for me; I'm a big fan of pointers, so handling containers of any "type" was just a matter of juggling lists of pointers. I rely on "hacks" under C to give me features that I want, typically by exploiting pointers. E.g., I use the "object->method" syntax by converting my "handles" into struct instances (more like C with classes). This highlights the relationship of "method" to "object" as well as helping out with the namespace issue (e.g., each object has a "constructor" of sorts, so I want a "new" method to let me instantiate another similar -- or identical -- object based on the object I have in my hand at the time).

This is the heart of the problem C++ poses for me over C; I can *see* what C will be doing and get a good feel for how many opcodes will be required (for a given processor). As you add all the behind the scenes magic in C++, things get a bit murkier: *where* are those opcode fetches going to occur? will they result in cache misses? what about the pointers that it will be chasing down -- how much time repriming the cache... JUST for that one object in this expression!

Years ago, with smaller (embedded) processors, you didn't worry about this cuz there wasn't much "acceleration" built into the processor. Now, though, even small caches are commonplace and can make a noticeable difference in performance. Silly to throw that away due to poor locality of reference!

And everyone wants to come up with their own extensions to these (along with everything else in the language).

There is a point where a language gets too damn complex and ambiguous. Why not program in *English*? That's wonderfully UNambiguous (NOT!), right?

The more complex the language, the fewer "expert" practitioners. And, the greater the tendency towards zealotry: people have INVESTED a lot and feel some need to JUSTIFY that investment -- the language (or any other technology) colors their future decisions in ways of which they might not be aware. (hammer, nail)

A "project" can be "too complex" -- and a *solution* can be too complex!

Complexity wins only when it can be hidden. If you have to be aware of the complexity FOR ANY REASON then it starts to become counterproductive.

My RTOS's API is pretty fat. Fat suggests complex. But, much of my complexity manifests in very consistent ways -- i.e., a common set of arguments for most function calls (instead of special function calls for different sets of arguments). A bit more typing "at development time" but a lot less to have to remember (special cases, etc.).

[One of the features of C++ that I did NOT like was "optional, *default* arguments"; too easy to forget they are there and lose out on the functionality they provide!]

Sunday lunch -- finestkind!

Reply to
Don Y

formatting link

Most only cover one ear. Our support people use them and I can coach them when they're on the phone with a client. Personally I would have a problem with them. My hearing is fine but I've always had trouble separating the signal from the noise. I'm the guy on the telephone with my finger stuck in my free ear or having trouble following a conversation in a noisy restaurant. I can function without distraction in noisy environments but that consists of tuning it all out.

Reply to
rbowman

Yeah, that's the problem: I need something binaural for the spatializer.

So, why do folks complain about the beeps and bops? Or, is it just an issue when you are *developing* the codebase (getting tired of hearing all those noises just to verify that your code is behaving as it should)?

Stub the routine to flash a message instead of playing a WAV? printf("The sound you would now be hearing is %s\n", filename);

Understood. That's the reason my aural interface just has the three channels and uses them the way it does. Many people can't handle multiple competing sound sources -- esp if they are directed AT them (not "dismissable" at some subconscious level -- cocktail party effect)

E.g., I find many social events stressful because several people will try to talk to you at once -- different subjects/conversations -- each oblivious to the fact that you're engaged in another conversation at the moment. "Rude" to ignore any of them so I struggle to try to *hear* each of them even as they talk over each other.

[I think in quieter environments, folks can more readily see that you're engaged in another conversation and defer their comments until an appropriate lull. OTOH, in crowded rooms, most folks can barely hear themselves THINK, let alone hear who might be talking to YOU!]
Reply to
Don Y

Yes, in house it's when the QA people are testing the functionality but I would not be surprised to find the wav files disabled on site although it would depend on the personnel. Working a dispatch center tends to be stressful and I don't see adding additional annoyances as a good thing. otoh, because it is stressful turnover is high and some people may need all the prompting they can get.

I'd never make it. "911, what is your problem?" --- "Oh, really? Your problem is you're dumber than a box of rocks. Good bye."

This really does happen occasionally. The Gestapo has nothing on a gaggle of concerned citizens with cellphones and 911 on speed dial. I have to hand it to the people that can handle the idiots with polite professionalism and still switch gears when the shit is really hitting the fan. It's like Russian roulette when you pick up the line.

Reply to
rbowman

Yup. I have the same attitude towards "support" (some really challenging users out there! :< )

HOA Nazis

Neighbor behind me did a stint as a dispatcher after retiring. Cop-across-the-street's wife, likewise. Neither seemed to stay with it for very long!

Reply to
Don Y

You can adapt. You can still write code under linux, it's just a little different if you want it to be native for linux. Otherwise, vm, wine, dosbox (depending on what you're programming) are all available to you to continue just like you were doing under windows/dos.

Linux is worth the time to learn...imho.

Reply to
Diesel

I had an older HP deskjet. I just plugged it into a free USB port, within seconds, it was available for use. Had no problems printing from libre office. I have several linux machines and several Windows based machines. They share files back and forth just fine...Samba and gvfs. Took little work to configure them. If you can configure a small adhoc windows network to share folders across them, you can do the same with a linux box...

As far as ACL hassle...

formatting link

That's one way to look at it, certainly. Another is that you have more choice and more options.

Reply to
Diesel

HomeOwnersHub website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.