The purpose of refactoring is to improve the understandability of your code, yet that definition provides no clues as to how to achieve that improvement. A general guide would be very helpful. Such a guide could direct you toward the basic approach to refactoring, which should make refactoring much less confusing. Over the course of several paragraphs, I will attempt to provide such a guide and clear up some of the confusion.
Every guide needs a starting point, so I will begin by defining one. The starting point for refactoring is an assessment of your application: you will be assessing the understandability of its various aspects. How your application is broken down is up to you, but the question of the assessment process still needs to be answered. This process is highly subjective and not particularly amenable to precise analysis, but I can point out some aspects of code to look at, which should help guide you in evaluating the understandability of your code.
The clarity of the control flow is the first and most essential aspect to assess. The control flow is the main portion of the logic; if the control flow is not clear, then the logic is not clear. Unclear logic generally indicates code that is difficult to understand, which could mean spaghetti code, and understanding spaghetti code is extremely difficult. Consequently, clarity of control flow, and therefore of logic, is essential for code to be understandable.
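To make the point concrete, here is a small sketch (in Python, with an invented discount-calculation domain) contrasting tangled control flow with a flattened version of the same logic:

```python
def discount_unclear(customer):
    # Nested conditions bury the rules; the reader must track
    # every branch to know which rate falls out at the end.
    if customer["active"]:
        if customer["years"] > 5:
            if customer["orders"] > 100:
                rate = 0.15
            else:
                rate = 0.10
        else:
            rate = 0.05
    else:
        rate = 0.0
    return rate

def discount_clear(customer):
    # Guard clauses expose each rule at a glance; the control
    # flow reads top to bottom with no nesting.
    if not customer["active"]:
        return 0.0
    if customer["years"] <= 5:
        return 0.05
    if customer["orders"] > 100:
        return 0.15
    return 0.10
```

Both functions compute the same result; only the shape of the control flow differs, and the second is far easier to follow.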
An understandable physical layout of code is just as essential as clear control flow. This concept is a little more nebulous, but a few examples should elucidate it. For example, where are variables declared relative to their first usage? A large number of lines between declaration and first usage hurts understandability: it forces a coder to traverse a greater distance to find the variable initially, and then, later, its uses. Traversing that distance harms not only readability but also the developer's memory of what the variable holds. Consequently, the shortest possible distance between declaration and initial usage is the way to go. In addition, incrementing or decrementing a variable on the same line as a comparison also hurts understandability. Two operations are occurring in succession on one line, and the developer may very easily overlook one. Such code should be laid out as two separate lines to prevent confusion. There are other examples, but the point is that the ordering of code matters. As with control flow, better ordering leads to better understandability.
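A quick sketch of both layout issues (in Python, with invented example data; the walrus operator stands in for C-style increment-in-comparison):

```python
def read_batch_terse(lines):
    count = 0
    # ... imagine many unrelated lines separating the
    # declaration of count from its first use ...
    results = []
    while (count := count + 1) <= 3:  # increment fused into the comparison
        results.append(lines[count - 1])
    return results

def read_batch_clear(lines):
    results = []
    count = 0  # declared immediately before the loop that uses it
    while count < 3:
        results.append(lines[count])
        count += 1  # increment on its own, visible line
    return results
```

Both functions return the first three lines, but in the second version neither the declaration distance nor the fused increment forces the reader to stop and untangle anything.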
The last main aspect of assessment is naming. How variables, structures, classes, and functions are named is highly important, as bad or inconsistent naming schemes make code more difficult to understand. When a programmer scans code, names should evoke certain information about what he is looking at. If the programmer has to scratch his head or guess what something is used for, then understanding an application becomes more difficult. Consequently, the quality and consistency of naming should be evaluated as well.
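A tiny, hypothetical illustration (in Python) of how much work naming alone does; both functions compute the same value:

```python
def f(a, b):
    # Cryptic names force the reader to reverse-engineer the intent.
    return a * b * 0.01

def monthly_interest(principal, rate_percent):
    # Descriptive names let the reader guess the purpose on sight.
    return principal * rate_percent * 0.01
```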
The step after assessment is more straightforward. The aspects of the program are ranked from least understandable to most understandable, and, starting from the least and heading towards the most, the list is refactored. By the end of the list, your program should be much more understandable. The process could be fairly easy or very difficult, but your code will be much improved either way.
The actual process of refactoring is not described here; I have only provided a general process, and more specific methods will come later. Still, the process described in the paragraphs above should serve as a solid guide. If you have followed it, you have probably determined what you need to do to make your code more understandable. For specific help in refactoring, read future posts. Good luck in refactoring your code.
Experienced and even some inexperienced programmers may sneer at my attempt to define such a commonly known term, yet successful refactoring cannot be achieved without knowing precisely what it is. Before writing about the process of refactoring code, I will attempt to define it in a precise, clear manner. The definitional process will begin by outlining the process of defining a word in general. That general process will then be applied to the task of defining what refactoring is. Let us begin by defining a process for creating definitions.
Every word and concept has a synoptic meaning that serves as a bird's-eye view; the nuances and exceptions are stripped away, and what is left is the simplest, most common meaning. This definition must be established first, as the more subtle aspects depend on it. Then, nuances are added and exceptions are appended. For complex words and concepts, there may be many nuances and exceptions, as a word or concept may be quite varied; however, all nuances and exceptions are still dependent upon the original concept described in the synoptic definition. In short, the definitional process involves elucidating a synoptic definition and then laying out both the nuances and the exceptions to that original concept.
How does the concept of refactoring fit into the process described above? At its core, refactoring is a simple concept: the reorganization of code. Synoptically, refactoring is simply reordering. As daunting and dizzying as refactoring may be, reorganizing and reordering are all that it is. Nuances and exceptions exist that make the concept more complex, however.
What is the purpose of the reordering and reorganizing? The classical purpose of refactoring is to improve understandability without changing functionality. Understandability is a nebulous term that floats about without shape; what exactly does it mean? A developer, who is only vaguely familiar with a program and has not written any of the code, is tasked with understanding how it works; how easy will that process be for him? An understandable program, relative to its complexity, should make that process easy. Understandability, therefore, is the degree of ease, relative to a program’s complexity, to which its logic can be understood.
Refactoring attempts to improve understandability without changing the functionality of the program, but what does refactoring look like? The extract method is the most salient example. A block of code is “extracted” from a method to create another method, and this new method is then called from the original. The reordered code should be more understandable, yet the functionality has not changed. This concept of changing the organization of code to improve understandability without changing functionality, as demonstrated by the extract method, is the core nuance of refactoring.
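A minimal before-and-after sketch of the extract method (in Python; the invoice domain and names are invented for illustration):

```python
# Before: one method mixes totaling and formatting.
def print_invoice_before(items):
    total = 0.0
    for name, price in items:
        total += price
    return f"Invoice total: {total:.2f}"

# After: the totaling block is "extracted" into its own method,
# and the original method calls it. Behavior is unchanged.
def calculate_total(items):
    total = 0.0
    for name, price in items:
        total += price
    return total

def print_invoice_after(items):
    return f"Invoice total: {calculate_total(items):.2f}"
```

Both versions print the same output for the same input; the reorganization merely gives the totaling logic a name and a home of its own, which is exactly the point of the technique.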
There is an exception to that nuance in my mind. Refactoring is not technically supposed to modify functionality, yet modified functionality can often lead to improved understandability. If changing the functionality of a program allows the developer to refactor that application in a more understandable way, then that modification is more refactoring than redesign. If the modification is so radically different as to be unrecognizable from the original, however, then redesign has occurred, not refactoring. Yet changes to functionality that are not as extreme but aid in understandability may be refactoring rather than redesign. Admittedly, this exception is subjective and belies the precision of the definitional process that I have described. Unfortunately, there is no precision in this matter, but stating the exception is still important, as redesigns can actually just be refactoring that bends the rules a little. Redesign as refactoring adds a key exception to the concept of refactoring.
However, I would like to add an exception to the one described above. The functionality of a program cannot always be tweaked, even if the tweak makes the program better in every respect. Refactoring through small changes in functionality may simply not be allowed. If you have identified a change that would make the program much easier to understand, then the change should be floated by the customer. Do not make the change before discussing it with the customer, and do not make it if the customer does not want it. If the customer likes your idea, then by all means implement it. This is a key exception to remember.
In summary, the core concept and even the core nuance of refactoring are straightforward, yet the subjective exception that I posited introduces a bit of ambiguity. If a change modifies the functionality of the program slightly but definitely makes the program easier to understand, then that change is probably refactoring, but determining when refactoring becomes redesign is definitely not precise. That imprecision adds complexity to the process. Imprecision aside, refactoring is generally easy to understand, yet still more complex than it appears.
There are many coders out there, but not all of them understand the thought process that leads to optimal code quality. In the paragraphs below, I will endeavor to elucidate that thought process. It is not unique to me, but I believe it to be the most natural way to consistently write the best code possible.
Every code project has dependencies, which create a critical path. That critical path allows the developer to draw a road map in his mind, as certain steps must be done in a certain order. This path helps guide the development process. This guidance allows the developer to code with an eye towards the future.
What is the benefit of having an eye towards the future? Coding within a bubble is often effective, but incompatibilities may emerge, as that code has to interact with other code. A failure to anticipate those interactions can lead to painful redesigns, which also lead to bugs. Bugs not only take up valuable time but they generally lead to a reduction in quality. For every bug that you discover and fix, there are several other ones hiding in the code. Clearly, shortsightedness has costs, yet those costs need not be borne.
Understanding what the code needs to work with and what the entire program needs to do provides a vision of the compatibility issues that may arise. As a result, the coder can write with an eye towards resolving future issues. This foresight leads to fewer bugs, which also means fewer hidden bugs. The result is optimal code quality.
Playing devil’s advocate for a moment, I would like to note that incompatibilities are not always guaranteed; instead, they can be potential incompatibilities. A given piece of code may have several potential incompatibilities, and the solutions to those potential problems can sometimes be mutually exclusive as well. Having a roadmap laid out does not resolve the mutual exclusivity of the solutions. Still, the foresight to understand that such problems may occur is better than being blindsided. The developer can make notes about the problems, and deeper redesigns to avoid them could be jotted down as well. A developer who lacks any foresight will have none of that information. Ultimately, a developer with foresight may end up in a bad situation, but the developer without foresight will be in a worse one.
Clearly, the foresight that a roadmap provides allows not only basic errors to be resolved but also the impact of disasters to be mitigated. The product that emerges will be optimized as a result. Just as important, the headaches felt by the developer will be greatly reduced. That is an ancillary benefit, but quite a valuable one.
As you read these words, I would hope that they lead to reflection on your own thought process. I hope that reflection will lead you to become more aware of your own flaws as well. I have certainly benefitted from examining my own methodology. My hope is that you receive the same benefit from reading what I have written.
An intermediate ASP.Net application requires more sophistication, so a more advanced skill set is required. The hurdle of being familiar with programming and/or VB/C# has been cleared, but another obstacle stands in the way. To be an intermediate ASP.Net developer, you must be familiar with the principles of object-oriented and structured programming. Without those skills, tackling the tasks below will be very difficult.
The skeleton already exists from the beginner’s application, so there is no need to start from scratch. This application will tweak the previous one, which should serve as a good lesson in why good practices are a lifesaver in the end. With that side note in mind, let us delve into what this application should look like.
First, you will need to examine your data and figure out how it is structured. Depending on the data, this could be a straightforward process or it could be incredibly tedious. Either way, a data model should emerge from this study. The next step is constructing that data model, which should be easier than the first process. After you have completed the data model, the queries need to be mapped to those classes. At the completion of that step, intermediate database interaction has been mastered.
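To sketch the idea of mapping query results onto data-model classes (shown in Python rather than VB/C#; the Customer fields and row layout are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: int
    name: str
    email: str

def map_rows_to_customers(rows):
    # Each raw tuple from a query becomes a typed object, so the
    # rest of the application works with the data model rather
    # than with positional columns.
    return [Customer(customer_id=r[0], name=r[1], email=r[2]) for r in rows]
```

The same pattern applies in C# with a class per table and a loop over a data reader; the point is that queries feed the model, and everything downstream consumes the model.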
Merely interacting with a database is not worth a whole lot. Besides, changing your queries to map to classes breaks your code. The final step rectifies that problem by binding those objects to the view. Now your software is functional again.
The final piece of the puzzle, as well as the least clear, is the concept of business logic. In the previous application, logic consisted of data validation and database operations. The goal here is to add some more sophistication, but where and what should be added? Starting out with simple extensions is best. Ordering forms in a workflow or making certain things available based on persistent, changeable states are good examples. Storing user names and passwords, and using those to authenticate, is also a reasonable hurdle. Try to tackle at least one of those suggestions. If you are able to do so, then you are capable of building intermediate ASP.Net applications.
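As a rough illustration of the authentication suggestion, here is a minimal sketch (in Python, not ASP.Net; the in-memory dictionary stands in for a real database, and salting is omitted for brevity, so a real system should use a salted, purpose-built password hash):

```python
import hashlib

_users = {}  # username -> hex digest of the password

def register(username, password):
    # Store only a hash of the password, never the plaintext.
    _users[username] = hashlib.sha256(password.encode()).hexdigest()

def authenticate(username, password):
    # Hash the attempt and compare it to the stored digest.
    digest = hashlib.sha256(password.encode()).hexdigest()
    return _users.get(username) == digest
```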
The question of “what to build?” is difficult to answer. Too small a project may not help you master the skills described above, but too large a project may be unwieldy for a neophyte. Where is the happy medium between those extremes?
The skills described in the previous section fit rather neatly into a standard type of web site. Many “modern” websites are simply web fronts for manipulating and viewing data, and that sort of site fits perfectly with the skills that a beginner should master. Mastering the skills described in the previous section will allow you to build this sort of site. Once the domain has been picked, you are ready to begin.
What sort of functionality is needed? Forms to enter and modify data, as well as forms for search, are required. Different types of data will require unique forms. Pages to display the data after it has been entered or searched for are also required. As a side note, the form and the display functionality can be merged into one page. Make sure that the domain you have chosen is relatively small, as bigger domains require more forms and more display functionality, and more functionality leads to greater headaches. In addition, entry and search forms require data validation to ensure that a user cannot break that functionality by entering bad values. Given a reasonable data domain, these functional elements should suffice in helping you master the requisite skills for an ASP.Net beginner.
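The validation requirement can be sketched as a simple server-side check (shown in Python; the field names and rules are invented examples of rejecting bad values before they reach the database):

```python
def validate_entry(form):
    # Collect every problem rather than stopping at the first,
    # so the user can fix the whole form in one pass.
    errors = []
    name = form.get("name", "").strip()
    if not name:
        errors.append("name is required")
    age = form.get("age", "")
    if not age.isdigit() or not (0 < int(age) < 130):
        errors.append("age must be a whole number between 1 and 129")
    return errors
```

In ASP.Net the same idea is typically expressed with validator controls or model validation attributes, but the principle is identical: never trust raw form input.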