I came across the term "scope creep" in an IT magazine. It describes a project's scope expanding without proper planning: new things keep being added to the existing project while the time frame stays the same.
Anyway, I also happened to find another form of creep there: responsibility creep. Programmers are told to do things that are not within their responsibility, for example locating files, taking screen captures, or comparing data after a database is restored.
Sometimes it just makes me want to say a nasty word (WTF?)
I wanted to write a post about what a bad idea it is to use triggers in a database. It turns out Tom from Oracle has already written an article on it, which I find best describes the downsides of triggers. Below is my brief summary of that article.
1. Maintenance headache - Triggers make code review challenging because the logic may be scattered across tables. Updating a field in one table may fire hidden logic in 500 other tables.
2. Incorrect implementation - A trigger may behave differently from what the developer expects, especially under concurrent processes or in a multi-user environment.
On my first day of work as a programmer, my senior had already taught me the rule about triggers: "avoid them at all costs". The reasons he gave me were:
1. Chain reaction effect - Insert one wrong record and possibly hundreds of tables get updated. To revert the wrong record, you have to revert each affected table one by one, and by the way, the revert process might even fire new triggers. In the end, it's hard to control.
2. Debugging headache - It is hard to trace the source of a bug. For example, where did this record come from? You need to check the triggers on every table involved.
The one good reason to use a trigger, as far as I have found, is logging: tracking what changes have been made to a particular table. Other than that, always stick to stored procedures when inserting/updating/deleting records.
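Since this is about Oracle, here is a minimal sketch of what that one acceptable use might look like. The table and column names are invented for illustration; the point is that the trigger only records history and hides no business logic:

```sql
-- Hypothetical audit table for tracking salary changes.
CREATE TABLE employee_audit (
  emp_id     NUMBER,
  old_salary NUMBER,
  new_salary NUMBER,
  changed_by VARCHAR2(30),
  changed_on DATE
);

CREATE OR REPLACE TRIGGER trg_employee_audit
AFTER UPDATE OF salary ON employee
FOR EACH ROW
BEGIN
  -- Pure logging: no updates to other tables, so no chain reaction.
  INSERT INTO employee_audit (emp_id, old_salary, new_salary, changed_by, changed_on)
  VALUES (:OLD.id, :OLD.salary, :NEW.salary, USER, SYSDATE);
END;
/
```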
The "I don't know" implied here is not for avoiding responsibility, but for putting down our ego and really meaning it. Saying "I don't know" is excellent if it is paired with:
I'll find out more and get back to you later.
I'll google it for you now and let's see.
I'll write a test script now. Let's test it and see what the results are.
I'm not too familiar with this subject. Perhaps [name of someone who is] will be able to help you.
However, suppose we give an answer to something we are not familiar with, have no idea about, or pretend to know just because we are the senior/supervisor/specialist (in the end, it's still an ego problem). Later, when our colleague/junior/subordinate googles the problem and finds out we were bullshitting, we lose our credibility. They will think we are a [label for a person who doesn't know the subject but acts like an expert].
Instead of letting other people think I'm a clown (my definition for that label), I would rather just say "I don't know, let's check the online documentation and maybe we can find something there".
This puzzled me. As we can see here, the same data is passed into the function, yet the return value for one is correct while the other is wrong. At first I thought there was some mysterious Oracle bug and somehow the date format had changed. I asked a senior to check for us, and after a while we finally found where the problem came from.
The SpecialDate() function retrieves the date value from a table that stores year 0008, while the hardcoded date value is 2008 (how would Oracle know it's 2008 and not 1008 or 3008?). Both show year 08 on the screen, but in truth they hold different values.
In conclusion, use the YYYY year format instead of YY, because otherwise you might scratch your head for hours just to figure out what went wrong.
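To make the pitfall concrete, here is a small sketch (the dates are invented). With a two-digit mask, both values look identical on screen even though they are 2000 years apart:

```sql
-- Both display as 08-06-01 with a YY mask:
SELECT TO_CHAR(DATE '0008-06-01', 'YY-MM-DD') AS from_table,
       TO_CHAR(DATE '2008-06-01', 'YY-MM-DD') AS hardcoded
FROM dual;

-- A YYYY mask makes the bad data visible immediately:
SELECT TO_CHAR(DATE '0008-06-01', 'YYYY-MM-DD') AS from_table
FROM dual;  -- shows 0008-06-01
```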
I have seen a lot of Vista bashing on the internet, but this one really happened to one of my family members.
A couple of months ago, my sister-in-law bought a new laptop (a Dell Inspiron 1420 with Vista pre-installed). From my point of view, I had hoped it would be a Windows XP machine; to my dismay, she went for Windows Vista instead. Since it was her laptop and Windows XP was going to be phased out soon, I assumed having Vista pre-installed might prolong her laptop's lifespan and support.
One night, she came to me for help after facing a rather critical problem with her laptop. After it booted up, the screen showed only a black screen and a mouse cursor. I couldn't get the task manager (Ctrl-Shift-Esc) to show up, and Ctrl-Alt-Delete didn't do any good either. I hard reset the machine and Vista allowed me to choose the safe mode option. Ironically, safe mode showed me that the installed version of Vista was SP1, and again, it was still a blank screen and a mouse cursor (the new black screen of death?). Then her boyfriend told me that he had run the repair option for Vista, but it still wasn't working.
Anyway, I googled the problem (search for: vista blank screen mouse cursor) and found some information about this bug, but there was no solution except to reinstall Vista. Some say it is caused by certain programs or drivers that are not compatible, but as home users, how would we know?
In the end, I recommended the reinstall option to her, and fortunately her boyfriend was willing to help. I also advised her to get an external hard disk to store her personal files, so that if it happens again, at least she will still have her files accessible. Besides that, I was tempted to recommend Ubuntu to her, but on second thought, I know she is the type of character who doesn't like change. Maybe I'll just wait until she is fed up with it, and then maybe Ubuntu will come to the rescue. :)
I remember telling my colleague that when starting out as a programmer, we read a lot of syntax and rely on reference books most of the time. It's something like a dictionary, where you find the words for your writing. As time moves on and we get familiar with the syntax, getting yet another reference book will not improve our skills any further. Moving forward is about picking up new concepts on how to write better software: more of the thinking shifts to the design work instead of the labor of coding.
And this book has both. It starts with brief coding in the first half, and the second part is more involved with design and coding concepts. It also has a fairy tale chapter, which is my favorite, because actually getting to use what he describes really does seem like a fairy tale. The interfaces and design in that chapter make sense, but I have seen none of them used so far in my working career.
So on to my book review, which as usual is separated into 3 sections.
1. Who should read it? Programmers learning PL/SQL for the first time, or senior developers who want more insight into design.
2. What is it all about? SQL and PL/SQL. Beyond syntax reference, there are also chapters on concepts: object orientation, debugging, testing and documentation. The object orientation chapter gave me great insight into how to utilize the object type feature available in PL/SQL. Of course, debugging, testing and documentation also play an important part in a programming career, and this book provides good examples for all of them.
3. What's so special about it? The book is written in a not-too-technical way with a bit of humor, so it's fun to read. Although I can't quite get some of the jokes, they may put a smile on other readers' faces. Besides that, this book will probably make me read it again and again, because there is always more to explore and learn. Fully competent in coding? Then try writing a better debugging tool. Got a perfect debugging tool? Then write better unit tests. Got a 0% failure rate on your scripts? Then write better documentation. No support calls from users because of excellent documentation? Create a new product and start coding again. By the way, did I mention there is a fairy tale chapter?
Overall, the book is enjoyable to read, and at the same time it covers all the major topics required to write better PL/SQL scripts.
Currently I'm reading a book by Seth Godin, Small Is the New Big, and one statement in it caught my eye as a programmer. But before that, it mentions the clean fire truck. The story is about firemen waiting for the fire alarm to ring, which seldom happens. What they do in their spare time is clean up the fire truck, and the author suggests they could use that time to spread fire prevention information instead.
So, if an organization is labeled as having a clean fire truck, it means the company doesn't use its time wisely. This leads to the statement I wanted to quote:
"Software engineers are often overwhelmed with an endless list of programming fires - and rarely get a chance to think about what they ought to build next."
The data migration I'm going to talk about is not the overall process strategy, but rather the actual programming work. Data migration is more than just transferring data from one place to another, and there are a couple of strategies you may consider including in the coding process.
1. Using a permanent temp table. Importing raw data into a permanent temp table allows more control over the data. For example, the temp table can have constraint rules created according to the destination table, or a primary key to avoid duplicates. Furthermore, transferring data within the database should give better performance and provides a more convenient way (no need to switch tools/editors) to troubleshoot the raw data. It also allows the developer to filter and check the data using SQL before import.
Pros: Raw data stays in one place, probably better data transfer performance, and the raw data can be checked using SQL. Cons: Extra space is required to store the data, so it depends on how much space is left on the server/machine and how large the raw data is.
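As a sketch of the idea (all names here are invented), the temp table can carry the destination table's rules so that bad rows fail at load time, and the raw data can be inspected with plain SQL:

```sql
CREATE TABLE stg_customer (
  cust_code  VARCHAR2(10) NOT NULL,
  cust_name  VARCHAR2(100),
  credit_lim NUMBER(12,2) CHECK (credit_lim >= 0),
  CONSTRAINT pk_stg_customer PRIMARY KEY (cust_code)  -- rejects duplicates
);

-- Data checking before import, without leaving the database:
SELECT cust_code FROM stg_customer WHERE cust_name IS NULL;
```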
2. Add an extra id/reference field in the temp table. This allows you to trace back to the raw data if some data inconsistency shows up after the migrated data has been in use for a couple of days. The value for this field is copied from the destination table's primary key/unique value. Furthermore, an index assigned to this field might help improve query performance.
Pros: Helps the developer find invalid raw data or track down bugs. Cons: Maintaining an extra field might cause a minor performance hit, and again, a more complex script is required (perhaps more fun for those programmers who like challenges).
3. Re-runnable script. Although it is not a must, a re-runnable script allows the migration process to import again. Of course, re-running the migration script should only pick up raw data that has been fixed, or rows whose reference field is blank (not yet imported); this shortens the time for the second run. On top of that, having a re-runnable script simplifies the remote support process: you can ask the DBA to run the script again after the raw data has been fixed.
Pros: If the migration process stops halfway, there is no need to clear the imported data; re-running the script will pick up only the fixed raw data. Better support for the DBA. Cons: A more complex migration script is required, though it should be a treat for programmers who want to try more complex logic.
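A minimal sketch of a re-runnable pass, assuming a staging table with a dest_id reference column as in strategy 2 (every name here is invented):

```sql
DECLARE
  n_id NUMBER;
BEGIN
  -- Rows already imported carry a dest_id, so a re-run skips them
  -- and picks up only fixed or not-yet-imported raw data.
  FOR r IN (SELECT rowid AS rid, cust_code, cust_name
              FROM stg_customer
             WHERE dest_id IS NULL) LOOP
    SELECT s_customer.NEXTVAL INTO n_id FROM dual;
    INSERT INTO customer (id, code, name)
    VALUES (n_id, r.cust_code, r.cust_name);
    -- Write the destination key back for later back-tracking.
    UPDATE stg_customer SET dest_id = n_id WHERE rowid = r.rid;
  END LOOP;
  COMMIT;
END;
/
```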
4. Cleaning/validation script. If the raw data obtained is all healthy, this section can be skipped. However, there is no such thing as perfect raw data for a migration. Cleaning and business logic validation are required to make sure the data fits into the system without causing errors later on. There is pre-cleaning, where invalid data is either filtered out or fixed before import. Then there is post-cleaning, where unneeded data generated by triggers gets cleaned up.
Pros: Better data integrity and a smoother migration process. Cons: More rules to check means more work required.
5. User-friendly screen output. It helps the developer if the user/DBA is able to report errors according to what is written on the screen. Hence, a user-friendly message or summary (e.g. number of records processed, successful imports and failures) will help the developer identify the possible cause of an error. It also helps remote support, since communication about what is going on will be much clearer.
Pros: Helps the support process by having the user report errors exactly the way the developer expects. Cons: Sounds unnecessary, and having to think up user-friendly messages may possibly degrade the programmer's IQ.
6. Logging. Last but not least is the logging process. It is better to log every activity performed, so that when an unexpected error shows up, the log information will really help in troubleshooting it. Of course, the error log can be in a file or a table, and the data it holds should at least include the rowid, a date/timestamp, the error message/code and a record id. The record id allows us to trace back to the raw data.
Pros: Having a history of what happened will surely help the debugging and support process. Cons: Migration performance will degrade if there is a lot of file I/O and table access; therefore, it is best reserved for error messages only.
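A sketch of what such an error log table might hold (column names invented), together with the kind of exception handler that would feed it:

```sql
CREATE TABLE mig_error_log (
  log_time  DATE DEFAULT SYSDATE,
  src_rowid VARCHAR2(18),   -- rowid of the offending staging row
  record_id VARCHAR2(30),   -- business key, to trace back the raw data
  err_code  NUMBER,
  err_msg   VARCHAR2(4000)
);

-- Inside the migration loop, log the failure and carry on:
-- EXCEPTION
--   WHEN OTHERS THEN
--     INSERT INTO mig_error_log (src_rowid, record_id, err_code, err_msg)
--     VALUES (r.rid, r.cust_code, SQLCODE, SQLERRM);
```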
Conclusion. Risk management should consider how to minimize the impact on the business process, such as backup/restore strategies. Of course, I'll leave that to the DBA to decide. Nevertheless, the above strategies might help minimize the risk and perhaps improve the support process.
Currently I'm doing a data migration and the process is simple: read the data from a text file into a permanent temp table in Oracle, then transfer the data to the live table. The temp table is then updated with an id number for reference, to keep track of which data has been imported, so the script/procedure can be run again for the failed records only. A log table is also created to record the import process, so that if any error appears for a specific transaction, the message is recorded.
Again, the sequence of the imported data must be followed, and therefore it's tempting to use the currval value to update the table. I was told that using currval was going to cause some trouble, as the environment is live and the sequence number might have changed by the time I use currval. The solution for this problem is simple: assign the sequence.nextval value to a variable and use that variable instead of currval.
So what can currval be used for? Since the value it holds can be unreliable, and I have no idea who has called nextval and changed the value, I believe it is best used as a record count, instead of the max() or count() functions. The benefit of this approach is perhaps some performance improvement, but more importantly, a more reliable application.
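As a sketch, the variable approach looks like this (sequence and table names are invented):

```sql
DECLARE
  n_id NUMBER;
BEGIN
  -- Take the value once, then reuse the variable everywhere,
  -- instead of reading currval again later.
  SELECT s_orderid.NEXTVAL INTO n_id FROM dual;

  INSERT INTO orders (id, status) VALUES (n_id, 'MIGRATED');
  UPDATE stg_orders SET dest_id = n_id WHERE src_ref = 1001;
END;
/
```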
Something struck me today while I was preparing a script for a data migration project. But before that, let me briefly explain sequences in PL/SQL.
A sequence in PL/SQL is a feature that lets the developer keep track of a key count, so it saves you from having a table just for storing key values. The basic use of a sequence is simple. For example, say I have created a sequence named s_employeeid.
To get the next (incremented) value: s_employeeid.nextval
To get the current value: s_employeeid.currval
The syntax for inserting an employee record may look something like this: Insert into Employee (id, name, age) values (s_employeeid.nextval, 'Alexander', n_age);
Note: n_age is a number variable.
In the SQL statement, it works rather straightforwardly. What happened to me today is that I wanted to assign the sequence to a variable and, at the same time, pass the sequence to a function. Something like this: n_empid := s_employeeid.currval; insert_employee(s_employeeid.currval, 'David', n_age);
Both statements returned an error. I was surprised and could find no reason why, as the sequence and the variable are used in exactly the same way. Do the variable and the sequence have different datatypes? Nope, I even defined the variable exactly the same as the sequence.
It still wouldn't work after a couple of tries. Finally I ended up with this: select s_employeeid.currval into n_empid from dual;
I was only able to get the sequence's value by going through the dual table. Something like a dummy SQL statement.
In conclusion, I don't like it. I thought a feature that acts and behaves almost like a variable would allow direct assignment to a variable; but not in PL/SQL. Why?
I'd rather stop asking.
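For the record, here are the failing and working forms side by side, as a sketch. (One hedged note: the error described here is the older behaviour; Oracle 11g and later do allow the direct assignment.)

```sql
DECLARE
  n_empid NUMBER;
BEGIN
  -- Pre-11g, these raise PLS-00357 (sequence not allowed in this context):
  --   n_empid := s_employeeid.currval;
  --   insert_employee(s_employeeid.currval, 'David', n_age);

  -- The workaround: go through the dual table.
  SELECT s_employeeid.currval INTO n_empid FROM dual;
END;
/
```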
Update: There is also one mind-boggling piece of syntax, common when using null in a where clause or in an if-then-else statement. These two look reasonable but never match anything, not even rows where name is null:
Select * from Employee where name = null;
If employee.name = null then
These are the correct forms:
Select * from Employee where name is null;
If employee.name is null then
I found that the null keyword is special and requires the word "is" to handle it. :P
Update: To check when the file will be available in your local time, go here.
I don't want to be called a cheater, but I need 3 copies of Firefox. Not just to support their effort at setting the world record: I need one for my Windows XP Home, one for Ubuntu, plus another for the laptop I use at the office.
So, click on the image above (or on the cute fox cartoon on the right side), which will bring you to their website. Fill in your region and email address, and they will send you an email on the day of the download, as I was told.
Never used Firefox 3 yet? Check out their beta or release candidate; maybe you will be tempted to join the record event (or just be plain evil and kill their file hosting server by downloading multiple copies at the same time).
Update: ThinkingRock has released the final 2.0.1 version for Windows. For Ubuntu, it can be run by installing Java Runtime 1.6 and running ThinkingRock 2.0 under Wine.
Searching for the definition of GTD will surely lead you to "Getting Things Done". So what is it all about and why does it matter? GTD is an organized way to handle your thinking and your todo list. It helps by breaking things into 5 major steps: collect, process, organize, review and do. I'm sure every programmer is somehow involved in projects in their life, either personal or business. If you want to start a project and somehow find it disorganized and stressful, perhaps this model can help you organize your thinking and put you on the right track.
ThinkingRock is a GTD application written in Java. The version I'm currently using is 2.0 Epsilon, which is not out of beta yet. Although there are one or two bugs, they don't hinder the overall process of managing a todo list.
Starting the application. ThinkingRock requires the Java runtime to be installed. Since it's written in Java, it can be used on different platforms. I have tried the executable on both Ubuntu and Windows; both work really well, and there is no major difference in the interface.
The application starts out with 4 sections: Setup, Collect, Process, and "Organise Review Do".
In the Setup section, there are 3 panels: Create Data File, Define Context and Define Topic. Create Data File is where we start: a data file must be created first, or none of the other functions will work (it took me a while to figure that out :P). Next comes Define Context. This is a tricky part, as I was confused about what it was for at first. Here, a process, medium or location is defined, for example a phone call: if I have multiple actions (tasks/todos) that require a phone call, I can group them together and get them all done at once while I'm on my phone. Another example is @home, where all the actions grouped under this context can be done together when you're at home. The final panel in this section is Define Topic. This is where categories such as personal, work, family or others are created. Actions defined later can be categorized and managed under the appropriate topic.
Next comes the Collect section, which has only one panel: Collect Thought. Straightforward: anything that pops into your mind goes here. Build a PC, pick up a technical book, post an article on your blog. Anything. Very soon you'll have a long list of thoughts in here.
The next section is the Process section. This is where thoughts are filtered as actionable, future, information, or deleted. Non-actionable thoughts are put under future, where they can be retrieved later for reprocessing, or filed as information, where the thought is kept for reference purposes.
Organise Review Do
Actionable thoughts are then put under a project (multiple actions) or as a single action. This brings us to the next section, "Organise Review Do". There are 6 panels here: Actions, Projects, Do ASAP, Delegated, Scheduled and Done Action. The Actions panel lists all the actions created in the Process section. Here you can add, edit or delete actions, or set them to Do ASAP, Delegated or Scheduled. I believe this is the panel you will click most of the time, reviewing all the actions created, whether to re-schedule, delegate or complete them. The Projects panel provides a project view, and the rest of the panels are self-explanatory.
What I like about it. I have been using it for about a month, and I find that ThinkingRock provides an easy-to-pick-up interface. Of course, there was some confusion when using it, but it was all sorted out after checking the help file provided.
Another point I like is that you can copy the executable folder onto a USB drive and run it from there. It really comes in handy, as you can carry all your projects/todos with you on one thumb drive together with the application. Of course, the machine must have the Java runtime installed, or the application won't work.
If you have the book Getting Things Done, perhaps this application will give you a much greater sense of the book, because now you can practice it while reading.
Finally, I would recommend this to anyone, not just programmers, who wants to organize projects or tasks in a more efficient way.
Below is a video from Google where David Allen, the man who invented the system, talks about GTD.
Update: Check the comments, where there is a tip on installing ThinkingRock onto a USB stick. Thanks, Randy, for the tip.
If I had to sum up this release in one word, it's mixed feelings. You can either love it or hate it, depending on how long you play with it at first.
As a Gutsy Gibbon home user, I will focus this review on Firefox 3 beta 5 for surfing the web, the audio and video players for entertainment, and the word processor for writing my articles. I installed Hardy Heron on my Dell 640M from scratch, since in my past experience upgrading didn't work well for me.
Love
1. Worked out of the box - no drivers needed to be installed.
2. More stable Flash player - previously, Firefox would freeze after a while of watching YouTube or playing Flash games. Now it is fixed (with the Adobe Flash player).
3. Transmission for BitTorrent - love at first sight; it's so easy to use. It was a good move to make Transmission Ubuntu's default BitTorrent client.
4. Date/time with weather forecast - I live in Subang, Malaysia (I'm sure international readers don't know where that is), and it has the weather forecast for it. Excellent.
5. OpenOffice 2.4 loads faster - the 0.1 version increase seems like a lot of performance improvement to me (the previous version was 2.3).
6. Unknown-brand mp3 player works for file transfer - it kept disconnecting in Ubuntu Gutsy Gibbon, but now it works. Almost (see the hate section below).
7. Easier codec installation - a more user-friendly approach to installing codecs, where you don't need to go to Add/Remove. Listening to mp3s and watching avi files has never been easier.
Hate
1. Files corrupted on transfer to the mp3 player - happened only with the unknown-brand player after some transfers. Maybe it was caused by the cable I used (my external hard disk cable); my 2GB thumb drive worked flawlessly.
2. Google Reader - news items kept going missing. It can be fixed by pressing Ctrl+0 in Firefox.
3. The default Flash player sucks - I thought it was the Firefox beta that didn't support Flash correctly. It was actually swfdec that caused the unpleasant surprise. Get the Adobe Flash player instead.
4. The wifi LED is dead - I thought Heron had killed my wireless, or that I would have to hunt for Linux drivers on the internet. It hadn't; the LED just won't light up. I found out after I left my wireless on, restarted from Windows and logged into Ubuntu again.
Conclusion. If you fall into the hate section within your first couple of minutes of using Hardy Heron, there is a tendency to go back to Gutsy Gibbon, where Firefox is not a beta, the wifi LED works, and no items go missing in Google Reader. However, after getting past those issues a day later, life seems better again. Much better.
So, I would conclude that Hardy Heron is a great release (most of the hates can be fixed, except one), aside from having to compromise on the wifi LED, which I hope the Ubuntu team will fix sometime in the future.
I read an article on the internet saying that a programmer requires 2 basic skills: communication and learning. Communication helps programmers get their job done more effectively, and learning helps them improve their coding skills, since as we know, programming languages never stop evolving. So, here's my review of this book, which I found useful.
1. Who should read it?
Anyone who prefers technical explanations and less storytelling about communication skills. So, if you're picking a communication book for your next read, perhaps you can check this one out.
2. What is it all about?
If you think communication is just about talking, maybe this book will give you more perspective than that. It divides communication skills into several sections, and here I will pick out 2 sections that are relevant to programmers.
Section 1 - Basic skills
Listening - an essential skill for finding out what the client, manager or team leader wants from our work. A miscommunication (e.g. missing or wrong functionality) will surely cause us to burn some midnight oil to fix the problem.
Self-disclosure - this one helps with team building. With this skill, your team members are able to understand you and why you made some of your decisions.
Expressing - this chapter is about how to get your message (e.g. needs, wants) across to people in a direct, immediate, clear, straight and supportive manner.
Section 3 - Conflict skills
Assertiveness training - programmers easily fall into the trap of saying yes to everything thrown at them. Learning when to say no will make them happier at work.
Fair fighting - conflict is inevitable. This skill is about resolving conflict in a healthier way.
Negotiation - not just for your salary. Negotiation skills can be used for project requirements, functionality, man-days and workload.
3. What's so special about it?
It's like a mini communication encyclopedia, giving a general idea of the most common communication skills. Although some of the content in the book is taken from others, it is still a great book for first-time learners. It also proves helpful for those who already have communication skills and want to go further (e.g. family or relationship communication). At the end of the book, there are references for each chapter. So, for further reading, this book won't leave you feeling lost about where to go if you find one of the chapters interesting and want more information.
Other recommended reading.
Crucial Conversations
The 7 Habits of Highly Effective People
I have to hold off my book review on communication, as Ubuntu released its latest version, Hardy Heron, last week. As most fans would, I downloaded it, installed it, and now I'm going to talk about my first 2 minutes (not exactly 2 minutes; it just happens that "2" is one of my favorite numbers) of experience with it on my Dell M640 laptop.
Downloading the installation iso file: 2 days (3 hours per day). Installing the system from scratch: 30 minutes. Logging in and finding out my wireless is not working: 2 minutes.
I restarted my PC into Windows to check whether the hardware was fine. It was working alright. Restarted back into Ubuntu 8.04: wireless not working. The wifi LED would not light up no matter what I tried: checking the BIOS, pressing Fn+F2 multiple times, still nothing. So, in conclusion, I should have upgraded the system instead, because they do not include the proprietary driver for my built-in wireless this time.
Another thing I found is the system date. I selected the correct time zone (GMT+8) during installation, yet after logging into the system for the first time, it showed the time 8 hours ahead. I'm not sure whether it's a bug or whether the system expects every new installation to be in time zone GMT 0. Anyway, I played around with the new date/time feature and corrected the time.
In short, if you have a happy Feisty Fawn system and want to check out Hardy Heron, my advice is to go for the upgrade. At least it saves you from being disconnected and having to dig through your store room for your long-lost RJ45 cable.
Update: I went home and tried to use my cable to go online instead. When checking the network settings, I found that the wireless network was able to detect my wireless router, even though there was no indicator on the laptop. The wifi LED was off, but I was able to get online. Weird.
Currently I'm working in Cyprus, and the weather is cold compared to Malaysia.
After working for a day or two, a client reported that they had a trace problem. The application did not work correctly because the trace file was pointing to the wrong directory, and the database was giving back errors.
So what are traces? What's good and bad about them?
A trace is something like a spy that monitors every program execution and puts every step into a log file or report. A simple example is the Windows XP event viewer, where you can track an error event. In VB, we have debug.print. In Informix, we have set debug file and trace on.
The good thing about traces is that they help the programmer track down the cause of an error. They can show what went wrong, which module produced the error, (possibly) which line of code caused it, and can even record execution time for benchmarking.
The bad thing about them is performance. Although it's not obvious in a single-user environment, the file writing really drags down performance on a server with a lot of transactions and running processes.
Furthermore, here's what struck me today with this trace issue: the previous programmer left traces all over the place, all pointing to different directories. To help the client solve this issue so my project could proceed, I had to track down each of the procedures (and sub-procedures) to make sure all of them were clear of traces. So that's another bad point for traces.
In short, traces are for hitmen, who always cover their traces in their work. Programmers had better think like a hitman before using them.
Hey, I like mistakes. Here I would like to point out the mistakes I find in my daily work.
NULL values. The number one enemy of the SQL programmer. In one scenario, I had to debug a stored procedure that processes a table by loading all the column data into defined variables. After some processing, it joins the variables into one long string at the end. Unfortunately, most of the columns allow NULL, and none of the assigned variables have a NULL check. When one of the variables is NULL, the whole string becomes NULL, which causes the program to stop. It seems the original programmer didn't care about the potential NULLs, or maybe it was just too much work to NULL-check every column. I admit that even I wouldn't like to spend my time coding a NULL check for every column.
In the general programming world, a variable is either yes or no, true or false, have or don't have. But NULL belongs to neither of these worlds; it just means unknown. Sometimes I feel it's like a black hole that sucks up everything it touches.
Although NULL is bad for the programmer, I believe its basic advantages are saving disk space and providing faster searches, since less data is stored. People may argue that in today's world the cost of hard disks is lower and the speed is much faster than in the old days, and that this NULL issue can be fixed by setting a default value on each column. But I would rather see the database system automatically assign a default blank value to a variable instead of NULL. With this rule, a variable could never contain NULL, since NULL doesn't hold any valuable information or mean anything. Eventually there would be no black hole, fewer programming errors, and no wasted disk space either.
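A small sketch of the string-joining failure described above, with the usual guard. (Note this is the ANSI behaviour, as with SQL Server's + operator; Oracle's || is a notable exception that quietly treats NULL as an empty string. All names here are invented.)

```sql
-- One NULL column poisons the whole string:
SELECT first_name + ' ' + phone AS contact
FROM customer;   -- contact is NULL whenever phone is NULL

-- Guarding each nullable column keeps the result usable:
SELECT first_name + ' ' + COALESCE(phone, 'n/a') AS contact
FROM customer;
```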
Sharing temp tables - Passing a collection of values from one stored procedure to another proves to be more challenging than in general programming. So far, I haven't seen any stored procedure in my work that can take a collection/array as a parameter. So how do we pass a specific collection of data to another stored procedure for processing? One conventional way is to use a temp table. When a temp table is created in a session, every stored procedure executing in that session can look into that temp table for data. This means a temp table is like a public table, except that it is dropped once the session ends. The drawback of using a temp table: if you need to change a column attribute, such as char(4) to char(8), every variable that holds that column's data in every stored procedure needs to be changed, which costs a lot of maintenance effort.
The advantages of a shared temp table are better performance and simpler coding. But once it is misused, you will need a lot of caffeine and patience to fix it as the system becomes more complex.
My work requires me to code SQL stored procedures, because 90% of the business process logic lives in the server rather than in the desktop application. Consequently, when the standard product is improved or many bug fixes are integrated, a new patch is released to the customers. This has created a nightmare for the developers releasing those patches. Every customer's customization was done by a developer coding straight into the standard product's stored procedures without concern. Indirectly, every customer has ended up with their own version of the stored procedures. To bring a customer's server up to the latest fixes, every stored procedure has to be compared with the standard one to find out what custom code has been implemented, so it can be carried over to the new version.
Below are my opinions on stored procedures.
The benefits:
1. Minimize network traffic - For example, when processing a large transaction table where each transaction must pass business logic verification, there is no need for the data to travel from the server to the client application for verification and then back to the server again.
2. Better performance - Since the stored procedure sits inside the database, closest to the data, it will no doubt perform better than a client application.
3. Simplicity - A stored procedure executes procedurally, so it should be easy to read. (There are exceptions, such as one stored procedure performing 2 different functions.)
4. Easier deployment - Updating the stored procedure gives every user the latest fixes without having to set anything up on each user's machine. For a web-based application, though, this does not count as a benefit.
The drawbacks:
1. Duplication - For example, you have a stored procedure (A) that queries table M to return 5 fields, and you need to create another stored procedure (B) on table M that returns 6 fields. (B) will have almost the same code as (A) except for the extra field, so (B) is a duplicate of (A). Alternatively, I could execute (A) inside (B) just to add the extra field, but that queries table M twice, which is a performance hit.
2. No version control - A stored procedure has no version of its own unless a comment states one, and 2 different customers having the same comment version number does not mean they have the same fixes. Developers increase the number whenever a fix or customization is implemented.
3. Heavy maintenance for customization - If there are 100 customers and each of them has 5 different customizations in their standard stored procedures, there will be 500 different stored procedures to look into. The ideal would be to keep the customization separate from the standard code; however, that is hard to achieve because the custom code is so tied up with the standard code.
4. Lack of design security (except Oracle PL/SQL, where the code can be hidden from the customer) - The customer can read the stored procedure, so some of the design might leak out.
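The duplication drawback (point 1) can be sketched like this, using SQLite through Python, with plain functions standing in for the stored procedures (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (f1, f2, f3, f4, f5, f6)")
conn.execute("INSERT INTO m VALUES (1, 2, 3, 4, 5, 6)")

# Procedure (A): returns 5 fields from table M.
def proc_a(conn):
    return conn.execute("SELECT f1, f2, f3, f4, f5 FROM m").fetchone()

# Procedure (B): the same query plus one extra field. Nearly all of (A)
# is duplicated, so every future fix must now be applied twice.
def proc_b(conn):
    return conn.execute("SELECT f1, f2, f3, f4, f5, f6 FROM m").fetchone()

# The alternative: reuse (A) inside (B) and fetch only the extra field
# separately -- no duplication, but table M is now queried twice.
def proc_b_reuse(conn):
    extra = conn.execute("SELECT f6 FROM m").fetchone()
    return proc_a(conn) + extra

print(proc_b(conn))        # (1, 2, 3, 4, 5, 6)
print(proc_b_reuse(conn))  # (1, 2, 3, 4, 5, 6)
```

Either way you pay: with copy-paste you pay in maintenance, with reuse you pay in an extra query.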
In conclusion, I would say stored procedures are for heavy processing tasks or for static business logic. Putting customization in stored procedures is definitely a bad move when there are a lot of clients and each of them has their own custom code. To solve this problem, I would recommend putting the customization into the client application (preferably a web application, to save deployment effort) instead of the stored procedure, so everybody lives happily ever after. :P
It's hard to believe that Microsoft is giving away their off-the-shelf products for free (currently it is limited to students for academic use only, and it will be available in 6 months). The free products include developer tools (e.g. VS2008 & VS2005), designer tools and platform resources (e.g. MSSQL 2005).
I wonder whether home users can be considered students. Hopefully they include home users as part of the program (so I wouldn't have to install multiple Express edition products on my machine). Nevertheless, it's good news for students who want to learn about Microsoft technology.
Object Oriented Programming (OOP) is common in today's programming languages, such as .Net, Java and C++. Therefore, it is common to use OOP to solve business logic problems by creating business objects (BO). On top of that, database designs also cater for OO, so that data objects and BOs can interact easily. Mapping a BO to a UI/form, however, somehow has not received much consideration. Here I would like to share the basics (I think) of the form object, which I learned from my senior (Danny Lim) in my working experience and which I find worth sharing.
First, business processes are stored in methods instead of under events. E.g., instead of clicking the save button and inserting a new record inside that event handler, the appropriate way is to click the save button and call a save-record method. This way the save-record method can be shared by others, and the UI is only for interaction, not for operation. Therefore, the UI only captures the user's input and passes/reads values to/from the BO. Below is the list of methods and properties of the form object.
1. Public Property FormState as enum - After the form object is created, FormState is assigned an enum value of Add, Edit or View (or more if you want). FormState is used to set up the UI controls. In the Add state, all controls on the form are cleared. In Edit, values are loaded into the controls. In View, controls are disabled, for reading only. This also means one form is used for 3 different operations. (Delete usually does not require a form.)
2. Method SetControl - Based on FormState, the controls are set accordingly. Example: in the Add state, all controls are enabled. In Edit, some controls are enabled. In View, all are disabled.
3. Function ShowForm(business object) returns boolean - When the form object is created and called, the BO is passed in for processing. The returned boolean confirms whether the process was successful.
4. Method ObjToUi - Object values are mapped and passed to the UI controls.
5. Method UiToObj - UI control values are passed back to the BO.
6. Function Validate returns boolean - After the user clicks save, the values in the controls are validated, and the business validation logic stored in the object is called and verified.
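The pattern above comes from a .Net/WinForms context, but it is language-neutral. Here is a rough Python sketch of the same idea (a plain attribute stands in for a text-box control, and the Customer BO is a made-up example; everything here is illustrative, not the original code):

```python
from enum import Enum

class FormState(Enum):
    ADD = 1
    EDIT = 2
    VIEW = 3

class Customer:
    """Made-up business object; validation logic lives here, not in the UI."""
    def __init__(self, name=""):
        self.name = name
    def is_valid(self):
        return bool(self.name.strip())

class CustomerForm:
    """Form object: the UI layer only moves values in and out of the BO."""
    def __init__(self, form_state):
        self.form_state = form_state
        self.name_control = ""        # stands in for a text box
        self.controls_enabled = True

    def set_control(self):
        if self.form_state is FormState.ADD:
            self.name_control = ""            # Add: clear all controls
            self.controls_enabled = True
        elif self.form_state is FormState.EDIT:
            self.controls_enabled = True      # Edit: controls enabled
        else:
            self.controls_enabled = False     # View: read-only

    def obj_to_ui(self, bo):
        self.name_control = bo.name

    def ui_to_obj(self, bo):
        bo.name = self.name_control

    def validate(self, bo):
        return bo.is_valid()

    def show_form(self, bo):
        # One form, three operations, all driven by FormState.
        self.set_control()
        if self.form_state is not FormState.ADD:
            self.obj_to_ui(bo)
        return True

    def on_save(self, bo):
        # Save button handler: no business logic here, only method calls.
        self.ui_to_obj(bo)
        return self.validate(bo)

customer = Customer()
form = CustomerForm(FormState.ADD)
form.show_form(customer)          # Add state: controls cleared for a new record
form.name_control = "Alice"       # simulate the user typing into the control
print(form.on_save(customer))     # True
print(customer.name)              # Alice
```

Note how the save handler contains no business rules at all; swapping the UI (desktop, web, test harness) would not touch the Customer object.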
The date falls on June 30, 2008. Microsoft will stop selling Windows XP, according to the Windows Life-Cycle Policy. Support will end on April 14, 2009. This means no further security updates or hotfixes will be automatically downloaded to your computer from April 2009 onwards.
Sad, but it's a fact that people somehow have to cope with. Below is a list of options for after the support ends.
Continue using Windows XP: There is a risk of your data getting lost or stolen due to future security vulnerabilities and computer bugs. Or use Windows XP in an offline environment; that would cut the risk factor by half.
Upgrade to Windows Vista: There is good feedback from some users, but mostly what I hear are the bad ones. Therefore, try it before upgrading.
Move to Linux or Mac: Install a Linux OS on your current machine and start getting familiar with it. Or buy an Apple Mac as your next computer, if point 2 is not an option for you.
Finally, sign a petition: Go to Save Windows XP and sign the petition, so that the people at Microsoft might reconsider and prolong the support. However, I doubt it, since they are not earning money from it.
As for me, I will go for option 3, because I'm quite happy with my Ubuntu.
When I found out there's no IIS for Windows XP Home, I kind of regretted not getting Windows XP Professional. This made it hard for me to pick up ASP.Net (just for fun), as there was no way to host my ASP pages on my laptop. I even went all the way to check out Apache, hoping there was a module for it to support ASP.Net 2.0, but it seems the community has dropped the project.
Fortunately, as I was browsing through the book Build Your Own ASP.NET 2.0 Web Site Using C# & VB by Cristian Darie and Zak Ruvalcaba, I found there is a lightweight web server for Windows XP Home called Cassini. That's good news for me, because I've always wanted to try out ASP.Net.
At this evening's family dinner, we went to one of the famous Chinese restaurants in the SS2 area. The restaurant's business is so good that it occupies 3 shop houses, spans 3 floors, and each floor can serve around 100 customers. By the time we reached there, around 8 PM, all 3 floors were almost fully occupied. My mom-in-law pointed towards the kitchen, which is an open concept separated by a glass window, and said, "Look at those staff, there are like 20 of them". Yup, she was right, and all of them were busy preparing and cooking the meals.
At the main entrance, a receptionist greeted us and asked how many people would be dining. Then he gave me a card with 2 numbers: one in the upper right corner, a 10 with a circle around it, and a number at the center of the card. The center number was the table number, 325, and the circled number was the size of the table. A waiter showed us to our 10-seat table and the card was taken away. I was shocked. For a business of this size, their internal system is run not by computer but by paper card.
So what's the big deal?
The first thing that came to my mind was: how do they keep track of the vacant tables on each floor? How do they know they've assigned the right size of table to the right number of people? The receptionist can't look through walls and tell which tables are vacant and what sizes they are. The workers or waiters can't just go and tell the receptionist every time there's a vacant table, because they might forget, and it's time-consuming. The receptionist can't count the bills that have been paid to check the tables, because people might pay the bill but stay chatting for a couple more minutes.
Therefore, the card holds a valuable piece of information, which is the table status (my guess). As long as the card (I guess they are stored on each floor) is not in the receptionist's hand, the table is assumed to be occupied (assuming no card goes missing, as well). The card is passed back to the receptionist once the table is ready, and he flips through the cards to give the correct table size to the right group of customers.
When I first saw the size of this restaurant, I was so tempted to jump to the conclusion of how badly this restaurant needed a computer system to operate. But I was wrong, after seeing it operate so efficiently using just 3 x 2 inch paper cards.