Then and now

...and will be completely controlled, lock, stock, and salary, by Obamacare.

Reply to
krw

For the better paying jobs, the PharmD is really becoming the entry level degree.

Reply to
Kurt Ullman

Yes, I am thinking pharmacist can be an old-man job... that is, work part time even into your 80s and still make decent money.

yes?

Reply to
me

On Fri, 31 Dec 2010 09:37:58 -0600, snipped-for-privacy@privacy.net wrote Re Re: Then and now:

Maybe. It depends on the employment policies of the big drug outlet that you work for: CVS, RiteAid, WalMart, Target, etc.

Pharmacists are becoming a dime a dozen and the big drug outlets have pretty much put the local mom/pop pharmacy out of business.

When you work for one of the big outlets, you're just another employee.

Reply to
Caesar Romano

Vic Smith wrote: ...

By what measure? Unless it is in a nonprofit or academic setting, one could say that _everything_ is business related since that's what companies are about.

OTOH, if one is talking of either LOC or other measures of actual code generation, I'd venture direct financial or business applications still remain a subset.

...

What one really shouldn't do is make rash assertions w/o at least some basic knowledge of the area.

"Theoretically possible" doesn't come even close to equating to "easily". I would venture that you're not even aware that there is still active development of the Fortran Standard and modern features continually being added to the language that include object-oriented features, etc., etc., etc., ...

What Fortran has that none of the alternatives you've mentioned have is support for vector processing and other features suited to massive computation. Those workloads simply aren't amenable to the models you've outlined, nor are there suitable compilers for much of the hardware used in those computing environments. If you were to try to accomplish the computation in the fashion you've outlined above, besides taking orders of magnitude longer to develop, you'd find it wouldn't have adequate performance to be of any use in the end, anyway.
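As a minimal sketch of the kind of thing I mean (array names and sizes invented purely for illustration), the Standard's whole-array syntax and do concurrent let a compiler target vector hardware directly:

program vector_demo
  implicit none
  integer, parameter :: n = 100000
  real :: a(n), b(n), c(n)
  integer :: i

  call random_number(a)
  call random_number(b)

  ! Whole-array expression: the compiler is free to vectorize it.
  c = 2.0*a + b

  ! do concurrent (F2008) asserts the iterations are independent,
  ! so the compiler may run them in parallel or on vector units.
  do concurrent (i = 1:n)
     c(i) = sqrt(a(i)**2 + b(i)**2)
  end do

  print *, 'c(1) =', c(1)
end program vector_demo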

--

Reply to
dpb

You're probably getting by fairly cheaply on food compared to the past. A couple of tables here from the United States Department of Agriculture, if you're interested:

formatting link
One more here using 1988 as the baseline:
formatting link

Reply to
Dean Hoffman

On Fri, 31 Dec 2010 13:34:23 -0600, dpb wrote:

By "business" I mean business transaction processing, and the "non-scientific" internal business needs such as accounting, planning, projections, inventory, payrolls, etc. It would take a library books to lay them all out, and though I've run across many of them, I'm far from an expert. The only measure I used, and the only one I know of, is demand for programmers and analysts, in my experience only. I spent about 15 years with CGA and CTG, big programmer body shops. We also did project work, but it was mostly providing bodies. And I knew folks in other big outfits, like EMS and smaller. Sometimes I was in management and a lot of reqs went through my hands. I say "sometimes" because I always worked as a programmer or analyst at a client site, but had "management" responsibilities as a field rep and moved around quite a bit. Suffice it to I never saw a req for a Fortran programmer. Not saying there never was one. I left that to take a staff position with a client about the time we were getting reqs for C ++, late '90's. Of course I'm near Chicago, with a lot of business corporation HQs and processing centers, so I understand my view is skewed. That's why I said "guessing." Bell Labs and Argonne are nearby, but I don't recall ever serving them. My brother worked at Bell Labs and told me the language he was using for switch design was proprietary - don't remember what it was called. He's an egghead with various engineering and CS Masters degrees. I suppose engineering types like my brother and son just use whatever language serves their math needs. Business uses relatively simple math and is mostly processing simple but massive amounts of transactions. Anyway, I knew all the big business programming shops in this area and the approximate size of their programming staffs. It was a massive number. I'm sure Silicone Valley, NASA and others have quite different overall needs. Just going by my experience, that's all. Anybody else can relate theirs.

I don't know. Like I said, my only measure is bodies doing programming. LOC is meaningless. I can code a "Hello World" in one line or a thousand. What I do know is every time I make a move with my CC or bank or cell phone, that movement gets processed through millions of lines of code in the many business applications I mentioned. OTOH I'm a gamer, and my games use what I assume is advanced vector processing, and I probably execute more instructions playing games than my financial transactions create. I sometimes read the "programming staff" scroll after a game. Not many programmers at all. All kinds of ways of looking at it.

Ok, I take "easily" back. Yes, I was aware that Fortran continues to add to the package. Also know that there has been some implementation of OO COBOL. And that there are still plenty of RPG AS/400 apps out there. I really thought RPG would be gone by now. I coded some, and found it atrocious - coding with a transparent plastic template as an aid. But it's still with us. Then again, I was always happy that I would be out of IT before Y2K rolled around. I was wrong on that projection too.

I googled a bit and it looks like most video games - which use vector processing - use C++. Newer hardware architectures also provide vector support. This is all above my pay grade. I don't care if Fortran guys argue with C++ guys. You can google those arguments. Very few people care that Fortran might crunch the numbers you need in 10 minutes while C++ takes 10 minutes and 6 seconds - or 12 minutes. Usually just accurate results with available programming skills win the day. I dealt with relatively simple but massive business apps. I found long ago that unless it affects the work at hand, it's pointless to argue about language.

Since hardware speed improvement allows what is commonly called "bloatware," the language's user interface and maintainability are often more important than efficiency for most processing. Otherwise most code running would be BAL or its platform equivalent. I heard of one large business CICS app in my area which was converted from BAL to COBOL; then they had to reinstall the BAL app because the COBOL was too slow. Don't have the details. Might have been all BS propagated by an assembler fan.

On my first IT job I was chastised by the boss - an old BAL programmer - for putting the open/close files routine out of the way at the bottom of the program. They were running an IBM 370 model with VM. I just did what I was taught in college - structured instead of top-down. He said it might cause paging in the CPU memory segment. I tested it both ways, and there was no difference in CPU utilization time, but I didn't tell him, just followed his suggestion. His criticism was useful though, because after that I looked hard at anything that ran long, and earned some kudos for it. Won't bore you with details, and you probably know that CPUs are frequently overtaxed by bad coding or flawed processes. That first shop still preferred you eyeball - desk check - your code for syntax/spelling errors instead of running it through the compiler. Got your knuckles rapped for too many compiles. Really backward in that respect. They had plenty of juice. But most of management and the programmers had teethed on 360s.

Things change. In the business world I went through the transitions from BAL to COBOL, ISAM/VSAM files to IMS and DB2, then OO and eventually SAP. There were many unmentioned languages in between. Now, except that SAP is the "big boy" commonly used at some outfits, and there are plenty of COBOL apps still running, I don't know anything. I'm retired. The headhunters even gave up on me. Do still have a mild interest in what's happening though, and still in limited contact with the business.

And I still have my prejudices and what I think are sensible views. One of those views is "use what works best for you." I don't want to argue this stuff, just chatting. Only know what I know. IT moves fast and has passed me by. And I never was involved on the "scientific" side of it. It was mostly just how I made a living. I was never highly technical or one to make up computer jokes. Just knew enough to be successful at it, and I'm happy at that. My real loves were always women and beer.

--Vic

Reply to
Vic Smith

Vic Smith wrote: ...

...

So, don't try to pontificate on what you know nothing about...

--

Reply to
dpb

Haven't heard you relate one iota of experience in the field of programming, or any numbers, so maybe you're not the best one to be telling me what to do or not do. In COBOL I would code that as FUCK-YOU in working storage.

--Vic

Reply to
Vic Smith

No, you haven't heard.

40 years: BSNE, MS Phys (NucSci). 20 years of eng'g code maintenance/development w/ various organizations, beginning w/ nuclear power generation design codes on mainframes (Philco 2000 series, followed by CDC 6600/7600/Cyber, w/ FORTRAN 66 thru FORTRAN 77 in conjunction w/ Philco assembler and CDC Compass as required). From there, 20 years as a consultant for various clients, from robotics and man-replacement equipment and instrumentation for nuclear utilities, evolving to support of I&C R&D for fossil utilities.

Not that it's any of your business.

--

Reply to
dpb

Cool, I wish I knew as much about programming as you do but there's only so much room in my fat head. :-)

TDD

Reply to
The Daring Dufas

The Daring Dufas wrote: ...

I think so, too, any more... :)

I returned to the family farm when Dad passed away about 10-12 yr ago now. Continued consulting w/ EPRI (aka Electric Power Research Institute), which had been my primary customer for several years, at their I&C Center located at the Kingston Fossil site (TVA, west of Knoxville, TN), altho I was also doing some work for CSI on products for them, thru which I was running the consulting work while technically an employee in the new products engineering group. (I had just released the wireless accelerometer product line to manufacturing the month Dad passed away; you can look it up for an idea of it, altho it's been modified significantly in the 10 years hence.)

While most work over the years was proprietary or internal R&D that doesn't have directly referenceable material, another product after the switch from commercial nuclear to consulting was the software for a predecessor of the Remotec ANDROS robot; you can find quite a lot of info on the current products. The incarnation I worked on was a combination of the base vehicle w/ a manipulator arm and instrumentation package for man-replacement purposes in nuclear generating plants, and was Westinghouse-purchased for use in some of their units in S Korea. This was while Remotec was still privately held by its founders (from ORNL there in Oak Ridge), several years before it was purchased. (Showing my age, that version consisted of an onboard VME-bus two-processor 68000 system w/ a separate operator console system w/ only a single processor. The system ran under CP/M w/ the operating code all LMI Forth. That was my first consulting job on my own w/ one other fella', nearly 30 yr ago now.)

I've several technical reports on models and evaluations of reactor safety, primarily incore instrumentation, which was my specialty area outside the code maintenance (I was/am, after all, primarily an engineer, not a programmer, despite almost continuous involvement in code-related work) while at the commercial reactor vendor. These are, however, not of much general interest, altho a couple of the papers presented at ANS annual meetings did end up being referenced in a major textbook on radiation detection and instrumentation, which was kinda' kewl... :)

I won't pretend that discussions on statistical analyses of samples of reactor containment vessel material evaluation for radiation-damage lifetime extensions of their concomitant reactors is a worthy subject for a.h.r so won't provide any report numbers of them or similar work... :)

Similarly, the evaluations of the nuclear design codes in comparison to physics startup measurements at the Oconee-class reactors submitted to and defended in front of the NRC ACRS for final licensing approval to allow power operation is somewhat esoteric and rather dull for those outside the field...

The more recent stuff is all pretty much tied up in EPRI owing to their licensing agreements, and I can't say too much about it. The last major task was development of a technique for measurement of pulverized coal mass flow rates in individual pipes (typically 14-20" diameter) to large power boilers, using advanced nonlinear signal processing techniques to infer the flow from the non-stochastic but chaotic turbulence noise in the pipes (look up the Lorenz attractor for the idea of non-random but non-repetitive processes) as picked up on a high-frequency accelerometer.

Anyway, enough geezer talk...

--

Reply to
dpb

I have enough understanding of the work you've been involved in to know how important it is to the safety of nuclear plants, especially when it comes to predicting failure of the infrastructure. Tell me if I'm wrong in assuming that some of the testing involves, in effect, listening to the pipes to ascertain their condition? A pipe has a particular sound, or characteristic reaction to fluid flow, when it's in good shape? I've worked with vibration sensors to monitor bearings in chiller plants before, so the early signs of failure could be detected and equipment could be shut down and repaired before a failure could cause catastrophic damage. I'm also wondering about the types of failure that could be caused by high pressure, high velocity fluid flow in pipes in a plant that's running 24/7? You mention turbulence, so I guess cavitation would be another concern? Do the high frequency vibrations cause stress fractures in the metal of the pipes, flanges and welds?

TDD

Reply to
The Daring Dufas

Well, thanks for nothing. At the New Year's party last night I was discussing that with a couple of ladies, and I promised to get back to them with some reports. My chances of getting laid just went down, thanks to you. They were really excited, too, when I talked about differing containment materials.

Reply to
Ed Pawlowski

Chuckle, snicker...it's a bummer, ain't it? :)

--

Reply to
dpb

The Daring Dufas wrote: ...

Those particular pieces of work weren't terribly related to each other -- the pressure vessel samples are chunks of the reactor vessel material that are placed in special specimen holders designed into the vessels at initial installation and then removed after a specified set of intervals and tested. The primary test for these samples is for ductility (testing against radiation-induced embrittlement) to, as you inferred correctly, determine that the vessel and other high pressure components have not undergone excessive degradation and are still capable of withstanding operating pressures and temperatures.

The accelerometer measurements in piping I referred to earlier were for the pulverized coal flow distribution pipes in coal-fired boilers, not nuclear plants. These are fairly low pressure but high air flow volume pipes, and the air is both for coal transport and is also a major fraction of the combustion air. Since it isn't liquid flow, and is blown not pumped, cavitation isn't an issue there. The turbulence noise here is simply a byproduct of the transport system; we used the characteristic that fluid transport/flow isn't stochastic but chaotic to find information regarding coal and air flow rates buried in the ultrasonic signal we could pick up via the accelerometer. It had the major advantage of being non-invasive; coal dust is extremely abrasive, so it's a major hassle to try to keep instrumentation alive inside a pipe. Being as how there is no free lunch, the counter problem was that the processing was quite intensive.

Back to your question regarding flow noise and measurements in nuclear plants -- in general, the answer is "yes, stuff like that is done," and is done routinely, as you're familiar with, for preventive maintenance on various mechanical systems. If you've worked in the area much at all you've probably come across my last employer that I mentioned above, CSI (Computational Systems, Inc) in Knoxville, TN, now a subsidiary of Emerson Electric in the Rosemount catalog of instrumentation.

As for other similar measurements in reactors, the secondary piping and so on wasn't particularly my area of expertise. I'll note, however, that other than the reactor itself, the rest of the plant is really no different from other large generating plants in pressures and/or flow; in fact, super-critical fossil boilers run at much higher pressures and temperatures than do pressurized water reactors. The containment of such fluids was pretty much routine long before commercial nuclear power came along.

There is routine monitoring of primary reactor coolant pumps for such problems, as you would expect. As a complete sidelight, interestingly, the reason the TMI accident progressed to the point it did was that the operators misinterpreted some pressure/temperature data and, fearing cavitation in the RCPs, turned them off, thus cutting off forced circulation in the core for several hours. The accident sequence was brought under control and began to be stabilized when the SRO of the subsequent shift recognized the issue and had the pumps restarted, as well as the HPI (high pressure injection) system, and recovered the core and reestablished core cooling. If the first crew had simply kept their hands in their pockets and let the safety systems and control systems "do their thing," there would have been no event other than a reactor trip and a manual reset of the PORVs, and the plant would have gone back to normal operation in a week or so after some routine maintenance. A case where a minor event was turned into a major one by a combination of mistakes after a mechanical failure: it wasn't terribly uncommon, nor unexpected particularly, for a PORV to fail to reclose automatically, and that it wasn't manually closed after it failed to reseat, and wasn't recognized as being open, was the source of the primary coolant loss.

Some of the things that are unique to nuclear units, done to monitor for early signs of failure or mechanical problems in the reactor, include "loose parts monitors" and "neutron noise analysis". The first of these uses a group of accelerometers mounted in various places on the reactor vessel and primary coolant piping to "listen" for impact noises that could be the result of some reactor internals failure or similar. They are tied into systems that use a triangulation method on time of arrival for impacts to try to localize where within the plant any particular noise might actually be coming from. Did do the software for a prototype of one of these systems for TVA way back when, too...just after the REMOTEC work. Unfortunately, that was about the time TVA was pulling back, so only the one prototype was ever finished, and by the time things picked up again, technology (and I) had moved on...
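(For the curious, a toy sketch of the time-of-arrival idea in Fortran -- the sensor layout, wave speed, and impact point are all invented numbers, and a brute-force grid search stands in for whatever the production systems actually do:)

program loose_parts_toy
  implicit none
  integer, parameter :: ns = 3
  real, parameter :: c = 5000.0        ! assumed wave speed in steel, m/s
  real :: sx(ns), sy(ns), t(ns), dt_meas(ns), d(ns)
  real :: x, y, err, best_err, best_x, best_y
  integer :: i, ix, iy

  ! Invented sensor layout (metres) and a "true" impact at (2, 1).
  sx = [0.0, 4.0, 0.0]
  sy = [0.0, 0.0, 3.0]
  do i = 1, ns
     t(i) = sqrt((2.0 - sx(i))**2 + (1.0 - sy(i))**2) / c
  end do
  dt_meas = t - t(1)                   ! arrivals relative to sensor 1

  ! Grid search for the point whose predicted time differences
  ! best match the measured ones (least squares).
  best_err = huge(1.0)
  do ix = 0, 400
     do iy = 0, 300
        x = 0.01*ix
        y = 0.01*iy
        do i = 1, ns
           d(i) = sqrt((x - sx(i))**2 + (y - sy(i))**2) / c
        end do
        err = sum((d - d(1) - dt_meas)**2)
        if (err < best_err) then
           best_err = err
           best_x = x
           best_y = y
        end if
     end do
  end do
  print '(a,2f6.2)', ' estimated impact location: ', best_x, best_y
end program loose_parts_toy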

"Neutron noise" is a very interesting and intellectually and computationally challenging area -- it uses the small fluctuations in the signal of the excore neutron detectors and signal processing to infer things about reactor internals such as the movement of the core inner liner or fuel assembly vibrations. As the inner barrel moves slightly (on order of tens of mils), the change in water density owing the that slight change in thickness is discernible in a very small fluctuation in the neutron flux at the detector. By monitoring this in time, if something were to happen to one of the studs that holds the barrel in place, one could detect a larger amplitude of barrel motion (this has happened at at least on reactor I'm aware of). By knowing this before either the next outage or larger damage became apparent, one can monitor the situation and determine when or if an early shutdown would be required.

There are any number of other monitoring systems and instrumentation besides for almost all systems and certainly for those that are directly safety related.

Again, undoubtedly, far more than one might care about in ahr... :)

--

Reply to
dpb

I think I understand about the pulverized coal. Ultrasonic transducers are used to measure the flow of material through the pipes because a paddle wheel would quickly disintegrate? Rereading what you wrote it seems that you were listening for a specific resonance or you were trying to separate the sound of the airflow from the noise of the flow of the pulverized coal mixed with it. Is that what all the processing power is required for? It kind of reminds me of what modern military sonar systems do. Heck, I find everything interesting, back in the last century BI, "Before Internet", I spent a lot of time in libraries reading every sort of engineering, scientific or medical journal I could get my hands on. Years ago, there was a Star Trek convention in Huntsville, Alabama hosted by NASA engineers and instead of looking for Mr. Spock, I was hanging out with the engineers looking at and discussing all the neat stuff they had on display like cross sections of the Space Shuttle fuel tank showing the different layers in its construction. Don't worry about the subject matter, I find it all interesting, besides, it will make me go searching The Web for more information. :-)

TDD

Reply to
The Daring Dufas

The Daring Dufas wrote: ...

Anything like a paddle wheel wouldn't make it 5 minutes. :)

In an early stage of the work, I tried a hardened steel drill rod as a sounding rod in one small-scale test facility. The test duration was only a couple of days, and it came out oval in cross-section even in that short a time frame, with a good third of the frontal surface gone. The utilities are very reluctant to insert stuff into the coal pipes that could fail and either block the flow in a pipe, w/ a resultant high pressure event back at the pulverizer, or block a burner nozzle in the furnace and cause an event there. The air:fuel mixture is right at the limits, so there's a real danger of fire or explosion if something goes wrong outside the boiler or in a coal pipe. Needless to say, an open 14" pipe w/ a burnout isn't a desirable event... :) In at least a few cases where they have happened, the flame has actually melted through the side of a boiler containment and ended up w/ an entire boiler open. That wreaks havoc in a plant _very_ quickly. :(

As far as more explanation of exactly what the computations are, unfortunately, the actual technique is proprietary, but it does not look at the signal in a conventional sense at all; it is not, as I've stated, based on frequency components per se, but on the fact that turbulent flow is chaotic, not random. It doesn't repeat exactly, but there are certain patterns, and we have identified some 30 scale-invariant measures that can be calculated from the broadband ultrasonic signal as picked up by a passive accelerometer, as minute vibrations transmitted through the pipe wall. We do not introduce any additional energy into the pipe at all, as a classic ultrasonic detector does.

There were some other research teams looking at other techniques such as microwave and/or more approximating conventional ultrasonics but our technique was/is unique.

Reply to
dpb

In other words your sensors were passive listening devices not active like the ultrasonic flow sensors I'm familiar with? I'm guessing you were looking for a semblance of a pattern in the white noise of the chaos. Am I getting warmer? :-)

TDD

Reply to
The Daring Dufas

The Daring Dufas wrote: ...

Yes, as I said, they were/are simply high frequency (>75 kHz resonance) commercially available accelerometers...

Except that white noise implies a stochastic process, whereas a chaotic process, while not strictly repeatable, is not stochastic. Hence, typical statistical measures used for such processes are not effective, and the measures used in the analysis are, as mentioned, nonlinear, and then combined in other nonlinear ways for prediction. (I know, that's a lot of mumbo-jumbo unless one already knows the answer, but as I say, the specifics are proprietary so I can't really reveal much more about the details.)

Suffice it to say that there is information buried in the audible and ultrasonic noise of the flow, and that it can be related to the actual air and coal flow rates in a given pipe over a range of operating conditions and air:fuel ratios and flow rates by a set of specific operations on the recorded waveform.

One important feature of these computations is that they all produce quantities that are independent of the actual magnitude of the signal itself (iow, they're self-normalizing). This is a key feature in that it means a simple level change from a location difference doesn't affect a given signature. It also means, of course, that simple measures such as the mean aren't what is giving the actual correlations. :)

But, from those correlations, a prediction of flow is possible for a new set of measures computed for the same pipe under any set of operating conditions, and this has been shown to be valid over a range of operating conditions and at various power plants of differing sizes, styles and manufacturers (albeit the correlations are, at least to this point, plant-specific; the measures used in them are for the most part the same ones out of the total set identified as candidates).
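(To give a public-domain flavor of "self-normalizing": here are two textbook examples -- kurtosis and zero-crossing rate -- that are unchanged when the whole signal is scaled by a gain. These are NOT the proprietary measures, just an illustration of the property:)

program self_norm_toy
  implicit none
  integer, parameter :: n = 10000
  real :: x(n), y(n)

  ! Synthetic zero-mean "waveform" and the same signal at a
  ! different gain (say, a sensor mounted farther from the source).
  call random_number(x)
  x = x - 0.5
  y = 37.0*x

  print *, 'kurtosis         x:', kurt(x), '  y:', kurt(y)
  print *, 'zero-cross rate  x:', zcr(x), '  y:', zcr(y)

contains

  real function kurt(v)            ! dimensionless: the gain cancels
    real, intent(in) :: v(:)
    real :: mu, m2, m4
    mu = sum(v)/real(size(v))
    m2 = sum((v - mu)**2)/real(size(v))
    m4 = sum((v - mu)**4)/real(size(v))
    kurt = m4/m2**2
  end function kurt

  real function zcr(v)             ! fraction of sign changes: gain-free
    real, intent(in) :: v(:)
    integer :: i, c
    c = 0
    do i = 2, size(v)
       if (v(i)*v(i-1) < 0.0) c = c + 1
    end do
    zcr = real(c)/real(size(v) - 1)
  end function zcr

end program self_norm_toy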

--

Reply to
dpb
