Why Do We Need Software Engineering?

To understand the necessity for software engineering, we must pause briefly to look back at the recent history of computing. This history will help us to understand the problems that started to become obvious in the late sixties and early seventies, and the solutions that have led to the creation of the field of software engineering. These problems were referred to by some as “The Software Crisis,” so named for the symptoms of the problem. The situation might also have been called “The Complexity Barrier,” so named for the primary cause of the problems. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that are now included under the title of software engineering, we have made and are continuing to make progress.

In the early days of computing the primary concern was with building or acquiring the hardware. Software was almost expected to take care of itself. The consensus held that “hardware” is “hard” to change, while “software” is “soft,” or easy to change. Accordingly, most people in the industry carefully planned hardware development but gave considerably less forethought to the software. If the software didn’t work, they believed, it would be easy enough to change it until it did work. In that case, why make the effort to plan?

The cost of software amounted to such a small fraction of the cost of the hardware that no one considered it very important to manage its development. Everyone, however, saw the importance of producing programs that were efficient and ran fast, because this saved time on the expensive hardware. People’s time was assumed to be worth spending in order to save machine time. Making the people process efficient received little priority.

This approach proved satisfactory in the early days of computing, when the software was simple. However, as computing matured, programs became more complex and projects grew larger. Whereas programs had once been routinely specified, written, operated, and maintained all by the same person, programs began to be developed by teams of programmers to meet someone else’s expectations.

Individual effort gave way to team effort. Communication and coordination, which once went on within the head of one person, had to occur between the heads of many persons, making the whole process very much more complicated. As a result, communication, management, planning, and documentation became critical.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself without more than a general concept of a plan. He or she could work things out or make adjustments as the work progressed. That’s how early programs were written. But if the home is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house is to be built. Plans need to be reviewed with the future owner before construction starts. And if the house is to be built by many carpenters, the whole project certainly has to be planned before work starts so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element so that cement contractors pour the basement walls before the carpenters start the framing. As the house becomes more complex and more people’s work has to be coordinated, blueprints and management plans are required.

As programs became more complex, the early methods used to make blueprints (flowcharts) were no longer satisfactory to represent this greater complexity. And thus it became difficult for one person who needed a program written to convey to another person, the programmer, just what was wanted, or for programmers to convey to each other what they were doing. In fact, without better methods of representation it became difficult for even one programmer to keep track of what he or she was doing.

The times required to write programs and their costs began to exceed all estimates. It was not unusual for systems to cost more than twice what had been estimated and to take weeks, months, or years longer than expected to complete. The systems turned over to the client frequently did not work correctly because the money or time had run out before the programs could be made to work as originally intended. Or the program was so complex that every attempt to fix a problem produced more problems than it fixed. As clients finally saw what they were getting, they often changed their minds about what they wanted. At least one very large military software systems project costing several hundred million dollars was abandoned because it could never be made to work properly.

The quality of programs also became a big concern. As computers and their programs were used for more vital tasks, like monitoring life support equipment, program quality took on new meaning. Since we had increased our dependency on computers and in many cases could no longer get along without them, we discovered how important it is that they work correctly.

Making a change within a complex program turned out to be very expensive. Often even to get the program to do something slightly different was so hard that it was easier to throw out the old program and start over. This, of course, was costly. Part of the evolution in the software engineering approach was learning to develop systems that are built well enough the first time so that simple changes can be made easily.

At the same time, hardware was growing ever less expensive. Tubes were replaced by transistors and transistors were replaced by integrated circuits, until microcomputers costing less than three thousand dollars could do the work of machines that had once cost several million dollars. As an indication of how fast change was occurring, the cost of a given amount of computing decreased by one half every two years. Given this realignment, the times and costs to develop the software were no longer so small, compared to the hardware, that they could be ignored.

As the cost of hardware plummeted, software continued to be written by humans, whose wages were rising. The savings from productivity improvements in software development from the use of assemblers, compilers, and database management systems did not proceed as rapidly as the savings in hardware costs. Indeed, today software costs not only can no longer be ignored; they have become larger than the hardware costs. Some current developments, such as nonprocedural (fourth generation) languages and the use of artificial intelligence (fifth generation), show promise of increasing software development productivity, but we are only beginning to see their potential.

Another problem was that in the past programs were often written before it was fully understood what the program needed to do. Once the program had been written, the client began to express dissatisfaction. And if the client was dissatisfied, ultimately the producer, too, was unhappy. As time went by, software developers learned to lay out with paper and pencil exactly what they intended to do before starting. Then they could review the plans with the client to see if they met the client’s expectations. It is simpler and less expensive to make changes to this paper-and-pencil version than to make them after the system has been built. Using good planning makes it less likely that changes will have to be made once the program is finished.

Unfortunately, until several years ago no good method of representation existed to describe satisfactorily systems as complex as those that are being developed today. The only good representation of what the product will look like was the finished product itself. Developers could not show clients what they were planning. And clients could not see whether the software was what they wanted until it was finally built. Then it was too expensive to change.

Again, consider the analogy of building construction. An architect can draw a floor plan. The client can usually gain some understanding of what the architect has planned and give feedback as to whether it is appropriate. Floor plans are reasonably easy for the layperson to understand because most people are familiar with drawings representing geometric objects. The architect and the client share common concepts about space and geometry. But the software engineer must represent for the client a system involving logic and information processing. Since they do not already have a language of common concepts, the software engineer must teach a new language to the client before they can communicate.

Moreover, it is important that this language be simple so it can be learned quickly.

Keeping Bandages Dry When Showering

Showering while wearing a dressing is quite a challenge. You already know you will need to replace it with a clean, dry one later, but how are you supposed to get clean if the doctor has told you to keep the bandage dry? Avoid annoying soggy surgical dressings by trying some of these methods of keeping bandages dry when you are in the shower.

1. Consider putting on a plastic bag. By placing your dressing or plaster cast into a plastic bag, you can keep it from getting wet. Be sure there are no holes in the plastic, then put your arm or leg through the opening. Use strong adhesive tape such as duct tape to secure the top portion and prevent any leaks. If you need to keep a bandaged hand dry in the shower, protect it with a rubber glove secured with waterproof tape to avoid leaks.

2. Try plastic wrap. If the placement of the dressing means a bag is no use, try plastic wrap. Ask a friend to help you wrap your dressing in plastic wrap and tape around the edges. Be certain to wrap an area larger than the bandage.

3. A condom is an imaginative way to protect! When you need to keep a dressing on your finger or your toe dry, put a condom over the bandage and use waterproof tape to seal up the ends. (Make sure, though, that your rubber isn’t lubricated!)

4. Buy a proper covering for the dressing. A shower bandage cover is a good investment if you expect to wear large dressings for an extended period of time. If you are looking for an inexpensive way to keep your dressing completely dry in the shower, you can use the ‘Shower Sleeve’, which works well and costs under ten dollars. The package includes everything needed to keep your injury dry while it heals.

5. Think about a sponge bath. If you find it too hard to keep bandages dry while you take a shower, consider another way to clean yourself. If the position of the bandage allows, you can sit in your bathtub without immersing the injured part of your body. Alternatively, you may only need to keep one leg out of the shower cubicle as you quickly wash the rest of yourself, then wipe that leg off with a cloth. While somewhat uncomfortable, this will enable you to keep the dressing dry while you get clean.

Top 6 Ways to Get An Angry Customer To Back Down

1. Apologize. An apology makes the angry customer feel heard and understood. It defuses anger and allows you to begin to re-establish trust. Not only that, but pilot studies have found that the mere act of apologizing has reduced lawsuits, settlements, and defense costs. You need to apologize to customers regardless of fault. Certainly, the apology needs to be carefully worded. Here’s an example of a sincere, yet careful apology:

“Please accept my sincere and unreserved apology for any inconvenience this may have caused you.”

2. Kill Them Softly With Diplomacy. This simple phrase has never failed me: “Clearly, we’ve upset you and I want you to know that getting to the bottom of this is just as important to me as it is to you.” When you say this, anger begins to dissipate. You’ve addressed the anger directly and non-defensively, and you haven’t been pulled into the drama of the attack.

3. Go into Computer Mode. To use Computer Mode you take on the formalities of a computer. You speak generally, without emotion, and you don’t take the bait your angry or difficult customer is throwing you. Your words, tone, and attitude are completely impersonal and neutral. (Think of the automated response system you speak to when you call your wireless phone company or bank.)

This “computer mode” response deflects, defuses, and disarms angry customers because you don’t add fuel to the fire by giving your difficult customer what they want: an emotional reaction. When you don’t take the bait, the difficult customer is forced to stop dead in their tracks. And that means you regain control (and confidence).

The Computer Mode Approach In Action

Let’s say your customer says:

“You don’t give a d*** about customers. Once you get a customer locked into a contract, the service aspect is over.”

It may be tempting to fuel the fire with an equally hostile response such as “What’s your problem, creep?”

Don’t take the bait. If you do take the bait, the situation will only escalate and nothing productive or positive will result. A computer mode response might look like this:

“I’m sure there are some people who think we don’t care about servicing customers.”

“People get irritated when they don’t immediately get the help they need.”

“It’s very annoying to experience a delay in service response.”

“Nothing is more distressing than feeling like you’re being passed around when all you want is help.”

And then you stop, like a locked-up computer.

No matter how uncomfortable the verbal abuse is or how ridiculous it becomes, continue to respond without emotion. This tactic works because it is neutral, doesn’t take the bait, and is unexpected. The difficult customer wants to throw you off, make you lose control, and get you to respond emotionally. When you refuse to do any of these things, you actually regain control.

Go into “computer mode” the next time you’re faced with verbal abuse from an irate or unreasonable customer, and I promise you, you’ll quickly regain control, and you’ll have fun with the process.

4. Give this question a shot: “Have I done something personally to upset you?… I’d like to be a part of the solution.” Of course, you know you haven’t done anything to upset the customer. You ask this question to force the angry customer to think about his behavior. Often, the mere asking of this question is enough to get the ballistic customer to begin to shift from the right brain to the left brain, where he can begin to listen and rationalize.

5. Show empathy. Empathy can be a powerful tool used to disarm an angry customer and show that you genuinely care about the inconvenience the customer has experienced. Expressing empathy is also good for YOU, as it helps you truly begin to see the problem from the customer’s perspective, and this perspective will help keep you from losing your cool when your customer gets hot. By letting customers know that you understand why they are upset, you build a bridge of rapport between you and them.

Here are some phrases that express empathy:

o “That must have been very frustrating for you.”

o “I realize the wait you encountered was an inconvenience.”

o “If I were in your shoes, I’m sure I’d feel just as you do.”

o “It must have been very frustrating for you to have waited five days for your order, and for that I am sorry.”

6. And finally, here’s a tip that works like magic: show appreciation for the difficult person’s feedback. After your difficult customer has ranted and raved, you can regain control of the conversation by interjecting (not interrupting, but interjecting) to thank them for taking the time to give you feedback. You can say something like:

“Thanks for being so honest.”

“Thanks for taking the time to let us know how you feel.”

“We appreciate customers who let us know when things aren’t right.”

“Thanks for caring so much.”

The reason this tip works so effectively is that the last thing your irate or unreasonable customer expects is for you to respond with kindness and gratitude. It’s a shock factor, and many times you’ll find that your customer is stunned into silence, which is exactly what you want. When the customer is stunned into silence, you get in the driver’s seat and steer the conversation in the direction you want it to go.

When you do these things you’ll find that being on the receiving end of verbal abuse doesn’t have to be threatening or intimidating. You can come across as confident, composed, and strong, and, most importantly, you’ll regain control of the conversation.

History of the Computer – Cache Memory Part 1 of 2

We looked at early digital computer memory (see History of the Computer – Core Memory) and mentioned that the present standard RAM (Random Access Memory) is chip memory. This conforms with the commonly quoted application of Moore’s Law (Gordon Moore was one of the founders of Intel), which states that component density on integrated circuits, which can be paraphrased as performance per unit cost, doubles every 18 months. Early core memory had cycle times in microseconds; today we are talking in nanoseconds.
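
To put that doubling rate into perspective, here is a minimal sketch in Python (the helper function and the sample years are illustrative assumptions, not figures from the article) of how performance per unit cost compounds when it doubles every 18 months:

    # Illustrative sketch of Moore's Law as quoted above:
    # performance per unit cost doubles every 18 months.
    # Purely for illustration, not real hardware data.
    DOUBLING_PERIOD_MONTHS = 18

    def relative_performance(years):
        """Performance per unit cost relative to year 0."""
        months = years * 12
        return 2 ** (months / DOUBLING_PERIOD_MONTHS)

    for years in (0, 3, 6, 9, 15):
        print(f"After {years:2d} years: {relative_performance(years):,.0f}x")

Ten doublings, roughly fifteen years at this rate, is about a thousandfold improvement, which is consistent with the move from microsecond core memory to nanosecond chip memory.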

You may be familiar with the term cache as applied to PCs. It is one of the performance features mentioned when talking about the latest CPU or hard disk. You can have L1 or L2 cache on the processor, and disk cache of various sizes. Some programs have a cache too, also known as a buffer, for example when writing data to a CD burner. Early CD burner programs suffered from buffer ‘underruns’. The end result of these was a good supply of coasters!

Mainframe systems have used cache for many years. The concept became popular in the 1970s as a way of speeding up memory access time. This was the time when core memory was being phased out and being replaced with integrated circuits, or chips. Although the chips were much more efficient in terms of physical space, they had other problems of reliability and heat generation. Chips of a certain design were faster, hotter and more expensive than chips of another design, which were cheaper, but slower. Speed has always been one of the most important factors in computer sales, and design engineers have always been on the lookout for ways to improve performance.

The concept of cache memory is based on the fact that a computer is inherently a sequential processing machine. Of course, one of the big advantages of the computer program is that it can ‘branch’ or ‘jump’ out of sequence, the subject of another article in this series. However, there are still enough times when one instruction follows another to make a buffer or cache a useful addition to the computer.

The basic idea of cache is to predict what data will be required from memory for processing in the CPU. Consider a program, which is made up of a series of instructions, each one being stored in a location in memory, say from address 100 upwards. The instruction at location 100 is read out of memory and executed by the CPU, then the next instruction is read from location 101 and executed, then 102, 103, and so on.
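
As a rough illustration of that fetch-and-execute pattern, the toy sketch below in Python (a hypothetical model, not any real machine or instruction set) steps through instructions stored at consecutive addresses starting at 100, just as described above:

    # Toy model of sequential instruction fetch, purely illustrative.
    # 'memory' maps addresses to instructions, starting at address 100.
    memory = {
        100: "LOAD A",
        101: "ADD B",
        102: "STORE C",
        103: "HALT",
    }

    address = 100
    while True:
        instruction = memory[address]   # fetch from (slow) main memory
        print(f"{address}: executing {instruction}")
        if instruction == "HALT":
            break
        address += 1                    # the next instruction usually sits at the next address

Because the next address is usually just the current address plus one, hardware can guess ahead and fetch the following instructions into a fast buffer before the CPU asks for them.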

If the memory in question is core memory, it will take maybe 1 microsecond to read an instruction. If the processor takes, say, 100 nanoseconds to execute the instruction, it then has to wait 900 nanoseconds for the next instruction (1 microsecond = 1000 nanoseconds). The effective repeat time of the CPU is 1 microsecond. (The times and speeds quoted are typical, but do not refer to any specific hardware; they merely illustrate the principles involved.)
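
Working through those illustrative figures, the short calculation below (in Python; the 100-nanosecond fast-buffer read time is an added assumption for comparison, not a figure from the article) shows where the time goes and why a fast buffer between memory and CPU is attractive:

    # Illustrative timing arithmetic using the figures quoted above.
    MEMORY_READ_NS = 1000   # core memory read: 1 microsecond
    EXECUTE_NS = 100        # CPU execution time per instruction
    CACHE_READ_NS = 100     # assumed fast-buffer read time (illustrative only)

    # Without a cache the memory read dominates each cycle:
    without_cache_cycle = max(MEMORY_READ_NS, EXECUTE_NS)   # 1000 ns per instruction
    idle_per_instruction = MEMORY_READ_NS - EXECUTE_NS      # 900 ns spent waiting

    # If the next instruction were already waiting in a fast buffer
    # (a simplified, non-overlapped model):
    with_cache_cycle = CACHE_READ_NS + EXECUTE_NS           # 200 ns per instruction

    print(f"Idle time per instruction without cache: {idle_per_instruction} ns")
    print(f"Effective time per instruction without cache: {without_cache_cycle} ns")
    print(f"Effective time per instruction with a fast buffer: {with_cache_cycle} ns")

On these assumptions the CPU spends 900 of every 1000 nanoseconds waiting, and an instruction already sitting in a fast buffer would cut the effective time per instruction from about 1000 ns to about 200 ns.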