Code that explodes conventions

30 Jun 11 In Software Development Failure Is The Key To Success

Every person, organization, and team fails.  The question - and the key to success as well - is how that failure is dealt with.  Intelligent people, organizations, and teams learn from their failures in order to ensure that the same failure does not occur again.

Software development is no different from any other enterprise.  Those who learn from their mistakes succeed.  Those who repeat the same mistakes tend to fail.  What's the lesson?  Learn from your failures and do not hide them in the back of the room.  Admit that you're wrong and make sure that the same mistake is not repeated.

This seems counter-intuitive.  We're conditioned to think that failure results from stupidity or incompetence.  Yet plenty of people who are both more intelligent and more competent than I am have failed miserably.  Some of those failures have led not only to the loss of money but to the loss of human life as well.  Such failures are unadulterated tragedies, and they should not be dismissed, but such things happen nonetheless.  After they have occurred, the only sane thing to do is to make sure that such disasters do not happen again.

With regard to software development, do not be afraid to make mistakes.  Some of the ones you make will be bad.  Hopefully, none of your mistakes will result in disasters, but, if they do, be strong enough to admit that you made a mistake.  Remember, shaming the maker of the mistake does absolutely no good.  Fixing the source of the mistake is all that matters.  As a result, both in your own practice and on your team, a culture of openness with regard to errors is best.

I am not saying that incompetence should be tolerated.  If an error clearly resulted from incompetence, then the incompetent party or parties should absolutely be scolded or even fired.  However, scolding does not fix the problem.  Only knowledge and insight into the source does, so do not sacrifice the chance to improve for the childish need to punish.  Smart organizations and programmers learn that lesson.

30 Jun 11 Is College Overrated For Programmers?

Computer programming is something that can be learned on your own.  There is no fundamental need for formal instruction.  In my experience, the knowledge gained from real-world experience has been more valuable than what I learned in the classroom.  The question that has to be asked is "do programmers really need to go to college?"

The short answer is "no."  There is little that you can learn in college that you cannot learn on your own or through practical experience, yet that obscures some facts about learning by doing.  Most self-taught programmers live by the code of whatever works.  As a result, they tend to hack together solutions.  While those solutions may be effective, creative, and even efficient, hacked solutions are often unmaintainable, inflexible, and inextensible.  In short, hacked solutions are poorly engineered.

Poor engineering quality has widely varying costs depending upon the software in question.  Web sites that are not mission critical or intended to generate large revenue streams do not need to be all that well-engineered.  The cost of failure is not that high, so less attention can be paid to mitigating failure.  "Hackers" are perfectly viable in this realm.  If that's the realm where you would like to live, then a formal environment like college is probably not for you.

College teaches rules.  Beyond the basics of writing code and the specifics of individual languages, there are provable rules that any good software engineer should live by.  In addition, there are a great many more rules of thumb that software engineers should be aware of as well.  Learning those rules and rules of thumb on your own is difficult in a whatever-works environment.  College is the place where you're most likely to pick them up.  If you want to be a software engineer instead of a hacker, college is the place for you.

I'll be the first person to criticize the university system.  There are some pretty heavy flaws in its focus with regard to technical disciplines such as software development.  Still, college is the place where a person is most likely to learn how to be a software engineer.  My recommendation would be to find one that focuses on combining practical experience with theoretical knowledge.  That really is the best way to learn how to develop software.

30 Jun 11 Class Hierarchies in C# and C++

Leaving aside pointers and memory management, C# and C++ are very similar languages.  C++ offers something that C# does not, though it is not always useful and can even be harmful: multiple base classes.  Admittedly, multiple base classes are not a make-or-break feature, but they can allow for greater reuse of code.

C# and ASP.Net offer a tremendous tool set for building a robust web UI.  One step in developing that UI is building a set of pieces from which to construct it.  Those pieces are often custom extensions of common controls such as text boxes.  Across all of the controls, including the custom ones, common functionality exists.  C# only allows a single base class.  The ability to implement multiple interfaces mitigates this problem to a great degree, but an interface only declares a contract; it does not carry an implementation.  In the case of spreading common implementations across extensions of library classes, the limitation of one base class makes code reuse more difficult.
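To make that limitation concrete, here is a minimal sketch.  The Library* classes are hypothetical stand-ins for unrelated library controls such as TextBox and DropDownList; all of the names are illustrative only.  The interface can declare Validate, but each control has to carry its own copy of the implementation, because each has already spent its single base class on the library control.

using System;

namespace ControlSketch
{
    // Hypothetical stand-ins for two unrelated library base classes.
    class LibraryTextBox { public string Text = ""; }
    class LibraryDropDown { public string SelectedValue = ""; }

    // The shared contract can be expressed as an interface...
    interface IRequiredField
    {
        bool Validate();
    }

    // ...but the implementation must be duplicated in each control,
    // because each one has already used its single base class.
    class RequiredTextBox : LibraryTextBox, IRequiredField
    {
        public bool Validate() { return !string.IsNullOrEmpty(Text); }
    }

    class RequiredDropDown : LibraryDropDown, IRequiredField
    {
        public bool Validate() { return !string.IsNullOrEmpty(SelectedValue); }
    }

    class Program
    {
        static void Main(string[] args)
        {
            IRequiredField field = new RequiredTextBox();
            Console.Out.WriteLine("field is valid: " + field.Validate());
        }
    }
}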

C++, which allows multiple base classes, has no such restriction.  Removing the restriction allows code to be reused much more easily.  Code reuse is a significant aspect of well-engineered software.  The more code is reused, the fewer places it has to be changed.  Fewer places of change leads to increased maintainability, which, ultimately, means fewer bugs.  Multiple base classes do introduce some issues, though.

Constructor and destructor order has to follow strict rules.  In C++, base-class constructors run in the order in which the bases are declared, and destructors run in the reverse of that order.  The coder, as a result, has to be aware of the dependencies between bases within his class when he lists base classes.  An error in judgement could lead to issues in construction and destruction.  Such bugs may be difficult to spot, or they may be looming without the developer being aware.  Clearly, solid unit testing and awareness of the possibility of the problem can get around the issue, but there is no hiding the fact that multiple base classes introduce new problems.

In fact, the issue of two bases defining a function or piece of data with the same name arises too.  Which implementation does the derived class use?  The C++ compiler treats an unqualified reference to such a member as ambiguous and refuses to compile it; the coder must qualify the name with the base class it should come from.  A little engineering discipline avoids the problem, but the fact that such confusion can arise at all introduces a new problem into the already murky world of software development.

While C# may not be as flexible as C++, and that lack of flexibility may lead to less code reuse, it also leads to less unpredictability with regard to inheritance.  When Microsoft designed C#, it was probably aware of this issue, so predictability was chosen over flexibility.  I can't say that Microsoft made the incorrect choice; C#'s lack of flexibility does occasionally annoy me, but I can live with it.

22 Jun 11 What Is Erlang?

Erlang has come into focus over the last several years due to Facebook's decision to use the language to achieve the scalability necessary to implement its chat product.  Facebook didn't invent Erlang, though, and is hardly its most impressive implementor.  Erlang was, at one point, proprietary to Ericsson, the Swedish telecommunications giant.

Ericsson, according to Wikipedia, used Erlang to run one of its switches, which achieved nine nines of reliability.  For a big company to run a mission-critical piece like a switch on a language like Erlang speaks volumes.  The Facebook implementation speaks well of the language, but it hardly communicates value the way a telecommunications giant relying upon it does.

More to the point, Erlang was designed from the ground up to be highly concurrent and highly fault-tolerant.  For certain types of work, that ability to run lots of lightweight processes concurrently for long periods of time is gigantically useful.  Software systems, especially distributed ones, require a myriad of different technologies.  Erlang fits into that spectrum quite well, because it succeeds where many languages stumble.

Java, C++, C#, etc. are quite clumsy when it comes to threading.  For the record, I'd like to say that Java's threading is pretty good, but its threads are still too heavy compared to Erlang's lightweight processes.  Consequently, Java's threading model, while still pretty good, isn't quite good enough for lots of light concurrent work.  Erlang succeeds perfectly in that role.

Why not write all of your software in Erlang?  While Erlang supports a mind-blowing number of concurrent processes, it takes longer to do the required computational work than languages like Java, C#, and C++.  Again, the way to rectify that problem is to combine several different types of languages.  Erlang, in my opinion, happens to be one of those languages that should be included.  I'll be attempting to explore how it's used in the coming months.  It may well be worth your exploration as well.

18 Jun 11 How Do I Evaluate A Technology?

The first step is reading the company's own literature on the technology.  This will give you the lay of the land.  What the company says a technology is and who it is aimed at will give you something to compare the reality of that technology against.  A company's word cannot be taken ex cathedra.  As a side note, no one's word should be taken ex cathedra.

The next step involves looking into what people have to say about the technology.  Popular technology magazines and sites have information on the technology that you may be interested in, but you should look for weightier sources whose focus is more on engineering.  My personal favorite is Dr. Dobb's Journal, but there are others out there.  As a side note, someone whom you can talk to personally about a technology is more valuable than a magazine article.  Seek those people out if you can find them.  Anyway, the information that you find will let you know what others who are more experienced than you feel about the technology that you are interested in.  This will serve as a counterbalance to the literature that you got from the company.

The next step, after you've read up on something, is to look into who is using it or investing in it.  Good press is one thing, but all of the good press in the world cannot make up for the fact that no one is investing in something or using it.  Adoption is very important and speaks volumes about the value of a technology.  If any of the big players or the major defense, aerospace, or automotive companies are using and/or investing in a technology, then that technology instantly gains credibility.  Any such technology should be strongly considered.

Secondhand knowledge does not trump firsthand knowledge.  In order to truly evaluate a technology, it must be used.  Start building or writing something with it.  Experiment with how it works.  Figure out what is good about it and what is bad.  Determine the technology's strengths and what its pitfalls are.  Lastly, come to a conclusion on how those strengths can be enhanced and how the pitfalls can be minimized.

That firsthand knowledge, combined with the secondhand knowledge described above, will give you all of the information that you need to correctly evaluate a technology.  The ability to correctly assess a technology is one of the keys to both business and engineering success.  If you follow what I described above, then your likelihood of succeeding in both your engineering and business endeavors will go up.

18 Jun 11 Do I Need a Business Logic Layer?

What is a Business Logic Layer?

A business logic layer contains the lion's share of the logic of your application.  All web applications have logic.  However, business logic is more complicated and sophisticated.  Most of the logic in your standard web app is a few if statements and for loops contained in a scriptlet.  Business logic contains complicated process rules and operations that can span thousands of lines of code.

What Does a Business Logic Layer Do?

A Business Logic Layer provides logic that sits between your application and your data access.  This can include access control, the firing of events, the starting of a work-flow process, etc.  Essentially, your site is the presentation layer, and the Business Logic Layer is what the application actually does.  What that is will differ drastically from web app to web app and is beyond the scope of this post.
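For a concrete picture, here is a minimal sketch of the idea.  OrderService, OrderRepository, and the rules inside them are hypothetical names for illustration, not a standard API.  The presentation layer calls the service, never the repository directly, and the service enforces access control and fires an event that downstream pieces like notifications can hook into.

using System;

namespace BusinessLogicSketch
{
    // Hypothetical data-access stub; a real app would talk to a database here.
    class OrderRepository
    {
        public void Save(string order) { /* persist the order */ }
    }

    // A minimal Business Logic Layer class.  The presentation layer calls
    // PlaceOrder; it never touches the repository directly.
    class OrderService
    {
        private readonly OrderRepository repository = new OrderRepository();

        // Fired so other pieces (notifications, workflows) can react
        // without the UI knowing about them.
        public event Action<string> OrderPlaced;

        public void PlaceOrder(string user, string order)
        {
            // Access control lives here, not in the page.
            if (string.IsNullOrEmpty(user))
                throw new InvalidOperationException("Unknown users may not place orders.");

            repository.Save(order);

            if (OrderPlaced != null)
                OrderPlaced(order);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            OrderService service = new OrderService();
            service.OrderPlaced += order => Console.Out.WriteLine("Notify: " + order);
            service.PlaceOrder("alice", "1 widget");
        }
    }
}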

Do I Need a Business Logic Layer?

My default is no.  Most sites do not require complicated process rules and operations.  Your generic web app is simply a presentation layer sitting in front of a database.  Those sorts of sites do not need a business logic layer.  There is a caveat, though.

Your site may not need one now, but it might in the future.  In that case, building the skeleton of a Business Logic Layer now, even though the site may not need it yet, would actually be a good idea.  How do you know whether you'll need a Business Logic Layer in the future?  Functionality like notifications and workflows requires a business logic layer; such features are very difficult to write without one.  If your site is likely to require something like that, then you should write the Business Logic Layer now.

This will save you both time and energy in the future.  Your site is already integrated with it, so you will have to rewrite less code.  Plus, adding the desired functionality means only expanding on existing code, not writing a whole new piece of the software.  Extending is always easier.  Take that route if you can.  You can thank me later. :D

15 Jun 11 Using the Correct Technology in the Correct Place

My article on minimizing the effects of bottlenecks discussed distributing responsibility, and my article on the trend in development talked about using multiple languages to build software.  To sum up the situation, the approach should be to use the correct technology in the correct place.  Achieving such a goal is easier said than done.

What technologies are available?  Where can they be used?  What can those technologies do?  These questions will not answer themselves.  Research and trial-and-error are required to answer them fully.  Both of those endeavors require time and patience.  Neither is in abundance in production projects.

Time is money in business, and getting to market quickly has advantages as well.  Every second spent adds cost to the product.  The longer the market has to wait, the more likely buyers are to get antsy and leave.  As a result, the business reasons for not taking extra time are legitimate.  Because of that business reality, most projects do not research or experiment with a myriad of technologies unless the requirements of the undertaking absolutely mandate that such an approach be taken.

That shortsightedness often breeds problems in the future.  The cost of those problems can vary, but sometimes it is significant.  What is an example of such a looming problem caused by shortsightedness?  Facebook is a popular web app with millions of users.  Facebook is built on PHP, which is slow.  That slowness, undoubtedly, is compensated for in other places, but the decision has costs around which Facebook has had to engineer.

I am not knocking Facebook.  It was developed in a dorm room.  PHP was probably chosen because it was fast and easy, but that is not the point.  Technology choices have consequences.  Not taking the time to investigate options could box you into a sub-optimal solution.

Where is the balance point between being short-sightedly impatient and missing a market?  That is a difficult question to answer in generalities, but I can say that you should take the time to make the right decisions regarding fundamental technologies.  Those are more difficult to change, and making poor decisions in those realms is more costly.

Keep this in mind as you develop your projects.  Remember to take the time to weigh your options in consequential areas.  If your project becomes a more permanent gig, then making such decisions well will be worth its weight in gold.

14 Jun 11 The Joy Of Hash Tables

Arrays and the spectrum of lists are hugely useful when you want to grab an entire collection.  Even if you want to grab one record, an array is pretty good, and a list, as long as it is not too long, is not bad either.  In the case of both arrays and lists, however, you have to know where that record is located or be able to recognize when you have found it.  That requires either keeping track of indexes or having the logic to know when a record matches.  In the case of searching, if the index is unknown, you have to keep scanning until you've found the record.  That is a linear search, and its big O, O(n), is not particularly good.  On top of that, coding such a solution can be messy.

Hash tables provide a cleaner solution.  A hash table, as the name suggests, has a table, and each entry in that table has a unique key associated with it.  When you add an entry, you pass the desired value and the key that identifies the record.  To grab the record, you pass the key to the hash table and you get the value back.  Each record takes roughly the same, effectively constant, amount of time to grab.  For grabbing one record at random, this is an improvement over arrays and lists.  When you're faced with such an issue in your software, give hash tables a look.  They can be very useful.
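In C#, the standard hash table is Dictionary<TKey, TValue>.  Here is a minimal sketch of the lookup-by-key idea, with made-up sample data, contrasted with the linear search a list forces on you:

using System;
using System.Collections.Generic;

namespace HashTableSketch
{
    class Program
    {
        static void Main(string[] args)
        {
            // Add entries: each value is stored under a unique key.
            Dictionary<int, string> employees = new Dictionary<int, string>();
            employees.Add(42, "Ada Lovelace");
            employees.Add(7, "Alan Turing");

            // Grab a record: hand the table the key and get the value
            // back without scanning the whole collection.
            Console.Out.WriteLine("Employee 42: " + employees[42]);

            // Compare with a list, where an unknown position forces a
            // linear search through the records.
            List<string> list = new List<string> { "Alan Turing", "Ada Lovelace" };
            string found = list.Find(name => name == "Ada Lovelace");
            Console.Out.WriteLine("Found: " + found);
        }
    }
}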

Hash tables are not perfect.  You do not use them to grab every record.  Hashing every key would add overhead that iterating over an array or list does not have.  Consequently, you should only use hash tables when you have a need to grab a single record or a small number of records at random.  Don't be overly cautious, though.  You'll know when hash tables are a bad fit.

13 Jun 11 How To Minimize The Impact Of A Bottleneck

Every piece of software has bottlenecks.  The question is how the impact of each bottleneck, or of bottlenecks in general, can be minimized.  I've got a general approach that I'd like to share.

All software has responsibilities.  Those responsibilities have the potential to hamper performance if implemented incorrectly.  The question that must be asked is how each responsibility should be implemented.

That's a question that is highly specific to the responsibility and to the type of software.  However, a general approach to fulfilling all of the responsibilities of software will guide you in the correct direction.  What is that approach?  Responsibilities should be distributed.  Bottlenecks are often a symptom of over-reliance on one thing, and over-reliance can be minimized by distributing responsibilities.  Where should each responsibility be placed?

Each responsibility, unless there is something preventing you from doing so, should be distributed to where that responsibility can be best handled.  You have to understand both the responsibility and the technology on which your software is built to effectively do this.  Of course, such distribution is likely to make software development more difficult, but that's the price for improving your software.

I worked on a web-based Flex chat system that needed to be able to handle a very large number of concurrent users.  Many such systems use shared objects to manage things like presence and message passing.  That is easy to implement, but the shared objects become a bottleneck to performance.  I used Tomcat, remote method calls, and message sending to handle the management of state.  Tomcat kept track of who was in which chat room and handled the responsibility of keeping the users updated.  The client did not have to do any state management.  This hugely improved performance, because responsibilities were delegated properly.
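To illustrate the shape of that delegation, here is a minimal C# sketch of the idea (the original system used Tomcat and Flex; ChatRoomRegistry and its members are hypothetical names).  The server-side registry owns all presence state, so clients only send and receive messages:

using System;
using System.Collections.Generic;

namespace ChatSketch
{
    // Server-side owner of presence state.  Clients never track who is
    // in a room; they just get told about changes.
    class ChatRoomRegistry
    {
        // room name -> users currently in the room
        private readonly Dictionary<string, HashSet<string>> rooms =
            new Dictionary<string, HashSet<string>>();

        public void Join(string room, string user)
        {
            HashSet<string> users;
            if (!rooms.TryGetValue(room, out users))
            {
                users = new HashSet<string>();
                rooms[room] = users;
            }
            users.Add(user);
            Broadcast(room, user + " joined");
        }

        public void Leave(string room, string user)
        {
            HashSet<string> users;
            if (rooms.TryGetValue(room, out users) && users.Remove(user))
                Broadcast(room, user + " left");
        }

        // A real system would push this to every connected client; the
        // console stands in for that here.
        private void Broadcast(string room, string message)
        {
            Console.Out.WriteLine("[" + room + "] " + message);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            ChatRoomRegistry registry = new ChatRoomRegistry();
            registry.Join("lobby", "alice");
            registry.Join("lobby", "bob");
            registry.Leave("lobby", "alice");
        }
    }
}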

When you write your software, that is the sort of thing that you should look for.  Where can a responsibility be shifted, and where can it be shifted effectively?  If you become good at answering that, then your software will perform better.

13 Jun 11 Bitwise Exclusive-Or As Boolean Exclusive-Or in C#

If you're like me, you've looked for ways to simplify your boolean logic.  Exclusive-or logic is particularly awkward since C# has no dedicated boolean exclusive-or keyword the way it has && and ||.  I decided to run an experiment: if boolean values were implemented the way that I figured they were, then the bitwise exclusive-or operator ^ should function the same as a boolean operator would.  It turns out that my intuition was correct.  In fact, the C# specification defines ^ for bool operands as a logical exclusive-or, so this behavior is guaranteed rather than a happy accident (for two bools, a != b gives the same result).  The test code is below.  Give it a try.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace CodeTester1
{
    class Program
    {
        static void Main(string[] args)
        {
            // true ^ true == false
            bool val1 = true;
            bool val2 = true;
            bool val3 = val1 ^ val2;
            Console.Out.WriteLine("val3 is: " + val3);

            // true ^ false == true
            val2 = false;
            val3 = val1 ^ val2;
            Console.Out.WriteLine("val3 is: " + val3);

            // false ^ false == false
            val1 = false;
            val3 = val1 ^ val2;
            Console.Out.WriteLine("val3 is: " + val3);
            pause();
        }

        #region Utility Methods

        /*
         * borrowed from: http://cboard.cprogramming.com/csharp-programming/83394-system-pause-csharp-warning-noob-question.html
         * pauses a console program until a key is pressed
         */
        public static void pause()
        {
            Console.Write("Press any key to continue . . . ");
            Console.ReadKey(true);
        }

        #endregion
    }
}