By giving an upper and a lower bound when estimating, you are explicitly entering the field of risk management (and, in parallel, the field of project management for adults).
The technique is very simple: give two numbers, such that you are 90% confident that the real value (the actual time spent) will fall between them.
Think it through: by having this range you simply have much more information than with a single "ideal" number (e.g. ideal man-days) or with an uncalibrated, incomparable figure (story points). Have a look at the following examples:
developing the input form takes 3 story points
developing the input form takes 5 ideal days
developing the input form takes from 4 to 6 days and I am 90% confident in it
developing the input form takes from 2 to 8 days and I am 90% confident in it
Which one of these 4 examples carries more information about the delivery date of that feature? It is obvious, and it uses the language of our clients (no school of economics teaches story points, but they all teach statistics very deeply).
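One reason the range form is more informative: ranges can be fed straight into risk arithmetic. As a sketch (the task list and the normal-distribution assumption are mine, not from the examples above), a handful of 90% ranges can be combined by Monte Carlo into a schedule statement at any confidence level - something a pile of story points can never give you:

```python
import random

# Hypothetical tasks, each given as a (low, high) 90% confidence range in days.
tasks = [(4, 6), (2, 8), (3, 5)]

def sample_duration(low, high):
    """Draw one duration, assuming a normal distribution whose
    5th/95th percentiles match the given 90% range."""
    mean = (low + high) / 2
    # For a normal distribution, the central 90% spans ~3.29 standard deviations.
    sigma = (high - low) / 3.29
    return max(0.0, random.gauss(mean, sigma))

def project_percentile(tasks, pct=0.9, runs=10_000):
    """Monte Carlo the total: sum one sample per task, many times,
    and read off the requested percentile of the totals."""
    totals = sorted(sum(sample_duration(lo, hi) for lo, hi in tasks)
                    for _ in range(runs))
    return totals[int(pct * runs)]

# "With 90% confidence the whole project fits into about this many days."
print(round(project_percentile(tasks), 1))
```

The distribution choice is an assumption; any shape with the right 5th/95th percentiles would do, and the mechanics stay the same.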
Dead simple, but not always easy. There are a few issues and biases you have to keep in mind, and try to avoid, to make this technique effective.
Anchoring bias: once we have a number in our head we tend to gravitate toward it, even if the anchor number is completely unrelated. A typical example: when the manager says he thinks it will take approximately 2 weeks to complete, you will come up with something close to that, especially when it is not impossible. Without such an "anchor" you might come up with totally different figures (most probably higher). (Thinking, Fast and Slow - Daniel Kahneman)
Some estimators say that when they provide ranges, they think of a single number and then add and subtract an "error" to generate their range. This makes the estimate too narrow, and therefore overconfident. Looking at each bound alone, as a separate binary question of "Are you 95% sure it is over/under this amount?", cures our tendency to anchor.
— Douglas W. Hubbard
The solution that works for me is reversing the anchoring effect. The technique is very simple: do not think about the number. Instead of starting with a point estimate and stretching it into a range, start with an absurdly wide range and then eliminate the values you know to be extremely unlikely. This is called the "absurdity test".
Example: I want to estimate a simple form with a dozen inputs, where I have to validate the input and store the content. I start with an extreme lower bound of 10 minutes and an extreme upper bound of 1 month. Then I ask myself: "Am I absolutely (95%) sure that it takes at least 10 minutes?" Of course the answer is no (just reading the story and the test cases takes longer). So I start calibrating: "What about 1 hour?", "What about 5 hours?". Sooner or later I reach a figure that I can no longer rule out. Then I do the same with the upper bound.
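A simple way to catch the overconfidence Hubbard describes is to score your past ranges against reality. A minimal sketch with made-up history data: if far fewer than 90% of your "90% ranges" contained the actual value, you are anchoring and estimating too narrowly.

```python
# Hypothetical history: (lower bound, upper bound, actual days spent).
history = [
    (4, 6, 5.5),
    (2, 8, 9.0),   # a miss -- the actual fell outside the range
    (1, 3, 2.0),
    (5, 10, 7.0),
    (3, 7, 6.5),
]

def hit_rate(history):
    """Fraction of estimates whose range contained the actual value.
    A calibrated 90% estimator should score close to 0.9 over time."""
    hits = sum(1 for low, high, actual in history if low <= actual <= high)
    return hits / len(history)

rate = hit_rate(history)
print(f"hit rate: {rate:.0%}")  # below the 90% you aimed for => ranges too narrow
```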
Tests are code. Code is buggy. Ergo… tests will contain bugs. So can we trust our tests? Yes, and especially so if we’re careful. First of all, tests are usually a lot smaller than the code they test (they should be!). Less code means fewer bugs on average. If that doesn’t give you a sense of security, it shouldn’t. The important thing is making sure that it’s very difficult to introduce simultaneous bugs in the test and production code that cancel each other out. Unless the tests are tightly coupled with the production code, that comes essentially for free.
The apparent inability of I.T. people to accurately estimate the effort, time and cost of I.T. projects has remained an insolvable problem. … poor estimation is one of the major factors in the breakdown of relationships between I.T. people and their clients.
However, in the age of outsourcing and increased competition, the need for I.T. people to more accurately estimate the costs and time-frames for new product delivery has emerged as a critical survival factor for many I.T. groups
Simply, poor estimates lead to a lack of credibility and poor business relationships.
almost all research into improving software estimation misses a vital point: it is people who estimate, not machines.
our research has shown that within certain conditions, I.T. people are pretty good at estimating. … the major precondition for improving estimation accuracy is the existence of an estimation environment free of inter-personal politics and political games.
The good news is that I.T. can estimate better. The bad news is that there are lifetimes of games and refinements of games that have to be avoided to do this.
Doubling and add some
Simply, you figure out [however you can] what you think the task will take and then double it. So a 5-day task is quoted as a 10-day task.
Of course, the problem with this game is that everyone knows it, which is why novice players are often caught out by bosses/clients … The other problem is that it never stops. In a version of bluffing as seen in poker, no one knows who has doubled, who has multiplied by eight, and so on.
Much later, when I was researching material for project management, I found that time and motion studies in the 1950s had shown that the average lost time [meetings, waiting, talking and so on] in office work was around 50%. So the doubling game was based on some sound research.
Reverse Doubling Option
This is the reverse of the Doubling Game. Simply, the boss or client doubles the estimate that he or she is about to give management or business clients and informs the project manager or programmer analyst that the timeframe is half the timeframe that the boss has told the clients.
The Price is Right/Guess the number I’m thinking of
Boss: "Hi, Mary. How long do you think it will take to add some additional customer enquiry screens to the Aardvark System?"
Here the boss or client is being very nice, almost friendly.
Mary: "Gee ….. I guess about 6 weeks or so."
Boss: "WHAAAT!!!! That long!!! You’re joking right?"
Mary: "Well, let me think ….. OK, I’ll do it in 3 weeks."
The reality is that the boss has already promised client XX that the enhancement will be done in 3 weeks, but the power of the game is to get the project manager, the victim, to guess the boss's estimate and then say it out loud [preferably in the presence of witnesses such as other team members]. Notice: it was Mary who said 3 weeks, not the boss.
This is a truly excellent game for bosses.
Double Dummy Spit
The X Plus Game
This game is very important in all large organisations and is rooted in the hierarchical power base.
Basically, the person who is either requesting an estimate or informing the team of an estimate/deadline that has been already decided, invokes or blames someone who is "higher up"in the organisation for the fact that the pressure is being put on the team.
Boss: "Look, people, I’m sorry to tell you that you have only 4 weeks to develop the new operating system but, Ms. Bigshot has demanded it by then."
The key to his game is that the Boss is a Level 22 [X] and Ms Bigshot is a Level 32 and is much higher in the organisation [X Plus] than the boss.
A meeting is called to discuss some innocuous topic such as what cookies are to be bought for the coffee breaks. The underlying purpose of the meeting is to get the victim into a room with lots of witnesses to provide the peer-group pressure.
Low Bid/What are they prepared to pay
Suspecting that the $10 million is going to be too much for the business group and wanting to undertake the project because it involves both a high organisation profile and interesting new technology, the project manager deliberately reduces the estimate to some number [say $4 million] that he or she believes the business client will accept.
Gotcha/Playing the Pokies
Extremely advanced estimation game players also learn that the best option when playing the Low Bid/Gotcha game is to delay telling the client that they need to spend additional money until the last moment and to repeat the process many times using smaller increments of $1 million instead of a big $4-6 million hit.
Client: "Hello Project Manager, will my project be delivered next week as promised? After all you have been telling me that things have been going well for the past year and the $4 million that I gave you has been used up?"
PM: "Well, I have some bad news and some good news."
Client: "Uh huh. Give me the bad news."
PM: "The bad news is that the system won’t be ready next week."
Client: "WHAAAT! $$#@@@!!!!!"
PM: "Wait. The good news is that things are going well and if you can find another $1 million we will deliver in 2 months."
Client: "Well I guess so… I don’t have much choice do I?"
Repeat until $6 million is spent or the project manager and/or the client is fired - either way the client loses.
While many people would think that project managers playing this game get fired a lot, the reality is that many organisations recognise that the loss of a project manager can lead to serious project problems. Given that this game is played by experienced project managers, they are often too clever at political games to be fired.
Smoke & Mirrors/Blinding with science
This advanced game is helped by the development of complex estimation techniques such as Boehm’s COCOMO, Putnam’s SLIM and Function Point techniques.
Client: "How long will the Aardvarker System take?"
PM: "Let me see. You have 22 External Inputs, 4 Logical Internal Files, 5 concatenated Enquiries … hmm.. that’s 8 by 24 plus 12 minus risk adjustment, add the Rayleigh Curve simulation, subtract because of the hole in the Ozone layer …. 50 weeks!"
Client: "Totally awesome!"
Client: "How long will the Aardvarker System take?"
PM: "Let me see. You have 22.1 External Inputs, 4.8 Logical Internal Files, 5.001 concatenated Enquiries … hmm.. that’s 8.02 by 24.002 plus 12.4 minus risk adjustment, add the Rayleigh Curve simulation, subtract because of the hole in the Ozone layer …. 49 weeks, 1 day and 3 hours plus or minus 1 hour !"
Client: "Totally awesome!"
Of course, readers will understand that at the time the "scientific" estimate was made not even the client clearly understood their own requirements.
It’s time to stop playing and start estimating
We must all become part of the elimination of these games. They hurt our reputation with our business clients [many of whom have also learnt to play them]. They result in our organisations investing money and time in projects that are not good investments. Most importantly, they screw up our projects and we all have to work hard and reduce quality to justify them.
Even if you can’t stop your managers and clients from playing estimation games you can certainly stop playing them with your colleagues and team members.
Maybe there will be a new generation of project people who are not taught these games. It’s up to you.
How long does it take to put all the dirty plates into the dishwasher?
How long does it take to do the regular weekend shopping?
How long does it take to get to work?
What is common in all of these examples is that you can give quite an accurate estimate for each of them. But that was not always true. You were not able to tell how long it takes to get to school when you had just moved to a new town. You had to make the trip a few times; only then had you collected enough real-life experience to answer the question.
And the same can be applied to software projects.
How long did it take last time:
to implement new database query?
to create a new form with a dozen fields and input validation?
to implement new web controller (independently from technology stack - struts, struts2 or spring mvc)?
to set up a new development environment?
to set up a new server?
to implement customization of the core product to a new client?
The first time the answer is always: "I do not know." But the second time you already have some reference you can use.
Each time we introduced our product to a new client there was a need for certain customizations. As development manager I was asked all the time: how long will it take to customize the version for the new client? Most of the time the question was asked before we knew anything specific about their needs. But my estimate was always accurate, because we had done such customizations many times before and I had my records. When I got the question I answered: "as long as it took for client X"
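The "as long as it took for client X" answer needs nothing more than a log of past actuals per kind of work. A minimal sketch (the work categories and the numbers are invented):

```python
# Hypothetical log of actual days spent on recurring kinds of work.
records = {
    "client customization": [12, 15, 9, 14, 11],
    "new input form":       [4, 6, 5, 4.5],
}

def estimate(kind):
    """Answer 'as long as it took last time': the min..max range
    of the recorded actuals for this kind of work."""
    past = records[kind]
    return min(past), max(past)

low, high = estimate("client customization")
print(f"from {low} to {high} days")  # from 9 to 15 days
```

Each finished task appends one more actual to the list, so the ranges sharpen as the record grows.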
Why is it so powerful? When you have at least 5 past data points (at the task or user story level that is not even a big number; at a larger scale it might be problematic), you can be roughly 90% confident that the typical (median) future value will fall within the min and max of these samples.
Rule of Five
There is a 93.75% chance that the median of a population is between the smallest and largest values in any sample of five from that population.
— Douglas W. Hubbard
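The 93.75% is plain arithmetic: a sample of five misses the median only when all five values land on the same side of it, and each side has probability 0.5, so the miss chance is 2 × 0.5^5 = 1/16. A quick simulation, using an arbitrary skewed population, confirms it:

```python
import random

def rule_of_five_hit_rate(trials=100_000):
    """Estimate how often the population median falls between the
    min and max of a random sample of five."""
    # Any continuous population works; use a skewed, lognormal-ish one.
    population = [random.lognormvariate(0, 1) for _ in range(10_001)]
    median = sorted(population)[len(population) // 2]
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

print(rule_of_five_hit_rate())  # close to 0.9375
```

Note that the rule says nothing about the shape of the population, which is exactly why it is so useful with as few as five task-level data points.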
Why is it not used more often?
no records about actual execution time
inexperienced developers, either in the business domain or in the technology
crappy code makes estimates unpredictable
If you do not have historical records of your past work, you should start collecting them. Most of the time you do not need precise bookkeeping of the time spent. I am sure you remember how long certain things took in the past, at least approximately.
A new developer is always something unpredictable. If a developer is a beginner you should not count on him until he has proven able to deliver reliably.
Uncle Bob, in one of his interviews or Clean Coders videos, talked about why we do not seem to see many old, experienced programmers. They are there, but every year there are so many new, freshly "graduated" programmers that you cannot spot them. He made a quick estimate of the proportion of experienced developers available and found that 80% of developers have less than 5 years of experience, worldwide.
What do you expect from someone with less than 5 years of experience on a large scale? After how many years of experience can someone perform brain surgery, or build a bridge over a river, or simply construct a family house without supervision? Hiring newbie developers is a risk you have to control. You have to invest in hiring more expensive but experienced developers. And do not forget the 10x effect of individual developers.
The lack of business knowledge can be controlled. It is not really possible to have a whole development team without business experience (of course it can happen, but in that case it is an extraordinarily bad management decision). In this case an experienced developer is responsible for the estimation. Of course, business training is always needed too. If none of these solutions work, you must pay big money to hire a developer with the relevant business experience. Then you can return to the first solution.
New technology. Hmmm… Do you really need to introduce new technology? As an example: nowadays so-called Big Data problems trigger the use of so-called NoSQL databases. But as I see it, most businesses do not have a big data problem. Most businesses have a simple database and SQL tuning problems.
If you insist that new technology is a must, then the solution is the same as described in the previous paragraphs: the experienced one is responsible for the estimates of the less experienced one; training; hire a master of the technology.
Crappy code: that one is a killer. I do not know how to deal with it effectively. It is not a problem when brand new code is written, because the crap does not exist yet. But when you are altering the existing shit around you… From a professional point of view I know how to improve the quality of shitty code, but I do not know how to get around the unreliable estimates it causes.
I started to read it in English but later continued in Hungarian.
A clear, easy-to-read but still scientific book about how willpower works and how to improve it.
It is a must-have book.
It goes hand in hand with "Thinking, Fast and Slow" by Daniel Kahneman.
Combining the best of modern social science with practical wisdom, Baumeister and Tierney here share the definitive compendium of modern lessons in willpower. As our society has moved away from the virtues of thrift and self-denial, it often feels helpless because we face more temptations than ever. But we also have more knowledge and better tools for taking control of our lives. However we define happiness - a close-knit family, a satisfying career, financial security - we won't reach it without mastering self-control.
Why do New Year's resolutions go unfulfilled? Why do diets fail time and again? Why do we so often prove incapable of doing what we intend to do? How did self-control, or the lack of it, become one of the biggest problems of modern man? What actually is willpower? Many scientific studies, often with surprising results, have been conducted in recent decades to answer these questions. It turned out, for example, that like our muscles, our willpower gets exhausted if we overstrain it, so we have to budget it wisely - but it can also be developed. In the spirit of the folk wisdom of "brains over brawn" and "order is the soul of everything", the book offers many tips on this, among others about keeping deadlines, shortening to-do lists, focusing attention and getting rid of harmful addictions. Besides describing the most important studies and analyzing their results, the authors make their message readable and accessible with examples taken from the life stories of famous people.
Ha-Joon Chang: 23 dolog, amit nem mondtak el a kapitalizmusról / 23 Things They Don’t Tell You About Capitalism
Very populist, and it is just a "review" (it describes why things happened and why they were wrong, only after they happened). On the other hand it highlights the weaknesses of capitalism.
Did you know (for example) that the theory of one of those smart guys who received a Nobel prize failed right after he got the prize? So many of us started using such theories to control economies that it finally turned out they do not work and cause disaster. As I mentioned, this is just one interesting fact as an example.
Exactly the same as described in Vezetői időgazdálkodás. If it had not cost just 1 pound, I would not have bought it.
Risk management is project management for adults.
On the other hand I am not sure I will use all the techniques described here. But it helped me understand why risk management is extremely important, and it gave me one more reason not to like Story Points.
Any software project that’s worth starting will be vulnerable to risk. Since greater risks bring greater rewards, a company that runs away from risk will soon find itself lagging behind its more adventurous competition.
By ignoring the threat of negative outcomes—in the name of positive thinking or a Can-Do attitude—software managers drive their organizations into the ground.
In Waltzing with Bears, Tom DeMarco and Timothy Lister—the best-selling authors of Peopleware—show readers how to identify and embrace worthwhile risks. Developers are then set free to push the limits.
You’ll find that risk management
makes aggressive risk-taking possible
protects management from getting blindsided
provides minimum-cost downside protection
reveals invisible transfers of responsibility
isolates the failure of a subproject.
Readers are taught to identify the most common risks faced by software projects:
Packed with provocative insights, real-world examples, and project-saving tips, Waltzing with Bears is your guide to mitigating the risks—before they turn into problems.
Objective: A good book about requirements. There are not so many really practical hints on how to collect and manage them.
Subjective: Very little value to me. I already had most of the knowledge from other sources and from experience.
As short as valuable.
Must have for project managers.
One of the best books on the subject. Nice, realistic examples are used to describe design patterns and basic software development principles.
And it is not only about design patterns. There are many sections about object-oriented programming and design, best practices, development principles, etc.
Extended studies over If Dogs Could Talk and funny stories about the intelligence of dogs. I love it.
The first time I read it (a few years ago) I did not find it so valuable. After the second read I recognized why it describes the truth. After the third time (nowadays) I could appreciate that all the facts and fallacies are supported by evidence, studies and statistics.
For non-professional developers only. It is just an introduction to many professional practices.
This book provides an overview of tools and techniques used in enterprise software development, many of which are not taught in academic programs or learned on the job. This is an ideal resource containing lots of practical information and code examples that you need to master as a member of an enterprise development team.
This book aggregates many of these „on the job” tools and techniques into a concise format and presents them as both discussion topics and with code examples. The reader will not only get an overview of these tools and techniques, but also several discussions concerning operational aspects of enterprise software development and how it differs from smaller development efforts.
For example, in the chapter on Design Patterns and Architecture, the author describes the basics of design patterns but only highlights those that are more important in enterprise applications due to separation of duties, enterprise security, etc.
The architecture discussion has a similar emphasis – different teams may manage different aspects of the application’s components with little or no access to the developer.
This aspect of restricted access is also mentioned in the section on logging.
The theory of logging and discussions of what to log are briefly mentioned; the configuration of the logging tools is demonstrated, along with a discussion of why it is so important in an enterprise environment.
What you’ll learn
– Version control in a team environment
– Debugging, logging, and refactoring
– Unit testing, build tools, continuous integration
– An overview of business and functional requirements
– Enterprise design patterns and architecture
Who this book is for
Students and software developers who are new to enterprise environments, and recent graduates who want to convert their academic experience into real-world skills. It is assumed that the reader is familiar with Java, .NET, C++ or another high-level programming language. The reader should also be familiar with the differences between console applications, GUI applications and service/daemon applications.
Very populist book.
Each section is composed like a National Geographic movie: there is some interesting story, artificially made "shocking" for the reader, and then it describes the "facts" and the science behind them.
The most valuable part of the book is the last section. The whole book is about why habits rule our life: it describes how they work and why they are so difficult to change (you cannot ignore a habit, but you can change the action to be taken).
And the very last section gives some hints on how to change a habit (I am sure most of us buy the book for this section alone). The real advice is that you should not try to change the trigger of the habit. You can change the action taken to something less frustrating. And how? Experiment until you succeed. So try until you become successful.
I had higher expectations.
I was a lucky young developer. In my first three months I ran into the 90% Done schedule game. I did it to myself.
I estimated that the given task would take 6 weeks. Of course, being an arrogant and naive developer, it never occurred to me to break the task into smaller pieces. (That would have told me whether I knew at all what had to be done.)
At the end of the first week I was 20% done. By the end of the second week, 40%. By the end of the third week 60%, and at the end of the fourth, 80%. By the end of the fifth week I was already 90% done. At the end of the sixth, 92%, and at the end of the seventh, 93%. Both time and I marched forward. By the end of the tenth week I was 97% done. At that point I believed I would be finished within a week - I had only three one-day tasks left. It took another 2 weeks. 12 weeks in total for the 6-week task.
While I was 92%, 93% and 94% done, I sent status reports to my manager in which I explained that I had run into unexpected problems and that I could not estimate everything that still had to be done. The manager took it well and just kept saying, "OK, just let me know your updated estimate!"
At the end of the task, when I had finally, completely finished, he was ready to move on. I told him that from then on I wanted to make estimates differently: in much more detail, with more deliverables every week. If I could not give a date, that would be fine with him. We talked it over and agreed to attach a risk factor to every estimated date.
I would like to say that I became a perfect estimator. But no. I am still learning estimation. But I do know that when I think I am 90% done, it may be only 50%.
I am not the only one. The 90% Done schedule game can occur under any conditions: with reasonable or unreasonable schedules, with low- or high-risk technologies. 90% Done is about predicting the future so that I can give a correct estimate, and about learning - in advance - when we are stuck. Not a trivial problem.
The 90% Done schedule game is the reason why I love getting feedback during the work. See, see, see, see
(Fact 8.) One of the two most common causes of runaway projects is poor estimation.
(Fact 9.) Software estimation usually occurs at the wrong time - at the beginning of the project.
(Fact 11.) Software estimates are rarely corrected as the project proceeds.
Consequence: Re-estimate your issues!
This is an extremely powerful technique to control estimation and the schedule plan.
Take your time (at about 1/3 of the estimated project schedule), review all remaining tasks/stories and adjust the estimates based on your experience. By that time you have gained solid experience in the domain and in the technology, so your estimates will be much more reliable.
What is more: it is absolutely independent of any other estimation technique. It can be applied to any kind of estimate: ideal man-days, story points, confidence ranges, etc.
You might be afraid of spending a huge amount of time on estimating again. But that fear is unfounded. The first estimation takes long, but if you analyze how you spend that time you will recognize a certain pattern: it takes more time to understand an issue than to estimate it.
But the second time you do not need to understand the domain. You can focus on estimation only.
Based on my experience it takes a fraction of the time spent on the first estimate.
Several months of work can be reviewed/re-estimated in 1 hour with the team, so it can be done really frequently. My experience is that after the first or second round no more re-estimation is needed, because the domain has become clear and further rounds do not reduce risk significantly.
I have ambivalent feelings about this book.
The first time I read it I thought it was obvious. (At that time I was reading many of the books it refers to.)
On the second occasion I realized that, from an objective point of view, it is a great book.
Reading it the third time (after partially reading "How to Measure Anything") I had to admit how important the book is. It is full of evidence, statistical facts and studies to prove how valuable each of its statements is.
The most important factor in software work is the quality of the programmers.
The best programmers are up to 28 times better than the worst programmers.
Adding people to a late project makes it later.
The working environment has a profound impact on productivity and quality.
Tools and Techniques
Hype (about tools and technology) is a plague on the house of software.
New tools and techniques cause an initial loss of productivity / quality.
Software developers talk a lot about tools, but seldom use them.
One of the two most common causes of runaway projects is poor estimation.
Software estimation usually occurs at the wrong time.
Software estimation is usually done by the wrong people.
Software estimates are rarely corrected as the project proceeds.
It is not surprising that software estimates are bad. But we live and die by them anyway!
There is a disconnect between software management and their programmers.
The answer to a feasibility study is almost always “yes”.
Reuse-in-the-small is a solved problem.
Reuse-in-the-large remains a mostly unsolved problem.
Reuse-in-the-large works best in families of related systems.
Reusable components are three times as hard to build and should be tried out in three different settings.
Modification of reused code is particularly error-prone.
Design pattern reuse is one solution to the problems of code reuse.
For every 25 percent increase in problem complexity, there is a 100 percent increase in solution complexity.
Eighty percent of software work is intellectual. A fair amount of it is creative. Little of it is clerical.
One of the two most common causes of runaway projects is unstable requirements.
Requirements errors are the most expensive to fix during production.
Missing requirements are the hardest requirements errors to correct.
Explicit requirements ‘explode’ as implicit requirements for a solution evolve.
There is seldom one best design solution to a software problem.
Design is a complex, iterative process. Initial design solutions are usually wrong and certainly not optimal.
Designer ‘primitives’ rarely match programmer ‘primitives’.
COBOL is a very bad language, but all the others are so much worse.
Error removal is the most time-consuming phase of the lifecycle.
Software is usually tested at best to the 55 to 60 percent coverage level.
One hundred percent test coverage is still far from enough.
Test tools are essential, but rarely used.
Test automation rarely is. Most testing activities cannot be automated.
Programmer-created, built-in debug code is an important supplement to testing tools.
Reviews and Inspections
Rigorous inspections can remove up to 90 percent of errors before the first test case is run.
Rigorous inspections should not replace testing.
Post-delivery reviews, postmortems, and retrospectives are important and seldom performed.
Reviews are both technical and sociological, and both factors must be accommodated.
Maintenance typically consumes 40 to 80 percent of software costs. It is probably the most important software lifecycle phase.
Enhancements represent roughly 60 percent of maintenance costs.
Maintenance is a solution– not a problem.
Understanding the existing product is the most difficult maintenance task.
Better methods lead to more maintenance, not less.
Quality is a collection of attributes.
Quality is not user satisfaction, meeting requirements, achieving cost and schedule, or reliability.
There are errors that most programmers tend to make.
Errors tend to cluster.
There is no single best approach to software error removal.
Residual errors will always persist. The goal should be to minimize or eliminate severe errors.
Efficiency stems more from good design than good coding.
High-order language code can be about 90 percent as efficient as comparable assembler code.
There are tradeoffs between optimizing for time and optimizing for space.
Many researchers advocate rather than investigate.
And the list of fallacies:
You can’t manage what you can’t measure.
You can manage quality into a software product.
Programming can and should be egoless.
Tools and Techniques
Tools and techniques: one size fits all.
Software needs more methodologies.
To estimate cost and schedule, first estimate lines of code.
Random test input is a good way to optimize testing.
“Given enough eyeballs, all bugs are shallow”.
The way to predict future maintenance costs and to make product replacement decisions is to look at past cost data.
You teach people how to program by showing them how to write programs.
Story points are becoming more and more popular as an estimation (and scheduling) technique. They are intended to simplify estimation and scheduling, but in practice they make life increasingly difficult.
What is a story point?
Let’s see some definitions:
“Story point is a arbitrary measure used by Scrum teams. This is used to measure the effort required to implement a story. In simple terms its a number that tells the team how hard the story is. Hard could be related to complexity, Unknowns and effort.” – agilefaq
“A story point is to program code what a kilogram is to sand or a kilometer is to distance: An arbitrary unit of measure which describes how heavy, far, big or complex something is.” – Explaining Story Points to Management (Otto: Soooo bad… see later; it mixes up SP, velocity, and their relationship…)
” The number of use case points in a project is a function of the following:
- the number and complexity of the use cases in the system
- the number and complexity of the actors on the system
- various non-functional requirements (such as portability, performance, maintainability) that are not written as use cases
- the environment in which the project will be developed (such as the language, the team’s motivation, and so on) “ – Estimating With Use Case Points (Otto: Quite a lot of not-so-related aspects of development…)
And many many more….
There is a huge literature on the topic. One of the most frequently quoted references is Mike Cohn’s Agile Estimating and Planning. It is not only about story points, but it is clear that Cohn prefers them. (On the other hand, the book itself is great, a must-read, and the de facto main reference on story points.)
In everyday practice, story points come up in the context of Scrum. I have participated in several Scrum introductions and have indirect (but close enough to be reliable) information about others. Without exception, they all introduced the usage of story points.
BUT: Scrum is not about story points. Not at all. What is more, it does not even talk about estimation. Nothing. When I say Scrum, I am talking about Scrum as it is defined. Of course, in practice “Scrum” always means the original Scrum PLUS many additional tools and techniques (including story points).
Once I talked with a manager using Scrum in his organization. He told me that one of their current challenges (after using Scrum for many years and still having this problem, hmmm… though, on the other hand, they have introduced a very agile process, which is great) is to make business people understand the concept of story points. I think it is a mistake to try to explain it to a businessman who is interested in time and schedule…
In one of his presentations, Dan North highlighted why story points are insane (around 24:40; not an exact transcript):
- (businessman) How much will it cost and when can I have it?
- (agile guy) We don’t know. We are agile.
- What?… How much will it cost and when can I have it? These are not hard questions.
- OK. We have done some work and we think that it gonna be 295 stories..
- And about 1000 story points.
- What is the story point?
- We don’t know yet.
- You are absolutely kidding me.
- We gonna run for few weeks and we gonna burn up and burn down and the velocity and hablalal…
- Stop! Stop right now and get out of my building.
(Humorous with lots of truth!)
As I see it (and this causes me many problems when dealing with story points), a story point mixes many unrelated estimation characteristics into one single number:
- complexity: a more complex job or the challenge of new technologies means more story points. But later, the same kind of story gets fewer points because of gained experience, although the same kind of task should have the same story point value.
- work volume: more work should mean a bigger number; obvious.
- team experience: a team less experienced in a certain area gives more story points than an experienced team. (As story points are sensitive to team composition, this becomes clear over time: in the beginning, the same task gets a higher story point value than later.)
- individual developer experience: a good developer can be more than 10x more productive and faster than a weak one. And since every team is composed of developers of different quality… a single number hides the capabilities and risks of those differences.
As these risks are independent of each other, you have to use different strategies and techniques to manage each of them. But if you hide these aspects behind a single figure, you do not even have a chance.
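To make the information-loss argument concrete, here is a toy sketch (the factor names and day ranges are made up for illustration): each risk factor gets its own range, and collapsing them into one number throws away exactly the per-factor detail you would need to pick a mitigation.

```python
# Hypothetical breakdown of what a single story point estimate hides.
# Each factor carries its own (low, high) range in calendar days.
factors = {
    "work volume": (3, 5),        # the pure building effort
    "complexity risk": (0, 4),    # new technology, unknowns
    "team inexperience": (1, 6),  # learning curve, varies per person
}

# Collapsing everything into one figure (say, "8 story points") discards
# which factor drives the risk; each spread below would call for its own
# strategy (spikes for complexity, pairing for inexperience, and so on).
single_number = sum(hi for (_, hi) in factors.values())  # worst case only
per_factor_spread = {name: hi - lo for name, (lo, hi) in factors.items()}
print(single_number)       # one opaque total
print(per_factor_spread)   # the detail a single number hides
```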
Plus it has many other weaknesses:
- any story point estimate (and its sibling, velocity) is unique to the team. Incomparable. How do you deal with a multi-team project portfolio? It has to be transformed into a common measure and unit (most probably calendar days).
- extremely sensitive to team composition changes. Someone gets sick? A developer has urgent support work on another project? You get a new team member? Someone quits? (Statistically, all of these are quite probable in any project that takes more than half a year.)
- gives no information without velocity. Obvious. Only after having a velocity can you estimate due dates (which are calendar units!) and costs (which are calendar units and money!).
- there is no strong correlation between story points and the real effort spent. The only thing you can be sure of is that a higher story point value means more than a lower one. But 4 SP is not 2 times more than 2 SP! (Disclaimer: a correlation exists, but it is not strong except for small stories with small story point values, which leads to the #NoEstimates subject.)
So if story points have so many issues, why are we using them?
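The velocity point above is easy to demonstrate: story points only become schedule information after you multiply them by a measured velocity, at which point you are back to calendar units anyway. A minimal sketch (the function name and sprint length are my own assumptions):

```python
from datetime import date, timedelta

def estimate_due_date(start: date, remaining_points: float,
                      velocity_per_sprint: float, sprint_days: int = 14) -> date:
    """Convert story points into a calendar date via velocity.

    Story points alone carry no schedule information; only once a
    velocity (points per sprint) is known can they be turned into the
    calendar units the business actually asks for.
    """
    sprints_needed = remaining_points / velocity_per_sprint
    return start + timedelta(days=round(sprints_needed * sprint_days))

# 100 points at 20 points per two-week sprint -> 5 sprints -> 70 days
print(estimate_due_date(date(2024, 1, 1), 100, 20))
```

Note that every input the business cares about (start date, sprint length) is already a calendar quantity; the story points are just an intermediate step that could have been skipped.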
Instead of story points
Instead of story points, you should use calendar-based estimates with proper risk management and (semi-)automatic estimate adjustment.
But how? That is the subject of another article (coming later).
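As a minimal illustration of what calendar-based, range-aware estimation can look like, here is a Monte Carlo sketch that aggregates per-task 90% ranges into a project-level 90% range. The triangular sampling distribution and the function name are assumptions of this sketch, not a prescribed method:

```python
import random

def simulate_project(task_ranges, trials=10_000, seed=42):
    """Monte Carlo sum of per-task 90% ranges.

    Each task is given as (low, high) in calendar days, meaning the
    estimator is ~90% confident the real duration falls inside.
    Triangular sampling peaking at the midpoint is just one simple
    choice of distribution.
    """
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi) for lo, hi in task_ranges)
        for _ in range(trials)
    )
    # The 5th and 95th percentile of the total give a 90% interval for
    # the whole project, in units the business can plan with.
    return totals[int(trials * 0.05)], totals[int(trials * 0.95)]

low, high = simulate_project([(4, 6), (2, 8), (3, 5)])
print(f"90% confident the project takes {low:.1f} to {high:.1f} days")
```

Notice that the aggregated range is narrower than naively summing all the low bounds and all the high bounds, because the independent risks rarely all go wrong (or all go right) at once.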
Original: 2009 on http://takacsot.blog.hu
– en –
A must-read book.
Two sources made me interested in the book. The first was a presentation on YouTube. The second was a book about Scrum. Both were persuasive enough to make me read it. So I did, and I was not disappointed.
It describes many subjects and experiences that are better not to have to live through yourself. You can make a much faster Scrum introduction if you know what is described in the book.
– hu –
Two sources raised my curiosity about this book. One was a presentation I saw on YouTube, the other was related to a book about Scrum. Both sources were convincing, so I dove in. And I was not disappointed. An excellent book, with many topics and experiences that are better not to learn the hard way if you don't have to. Agile development methods can be introduced much faster if you are familiar with what the book describes.
I am quite sure I will read it a few more times this year.
Why I hate Hibernate
It is a series of articles highlighting issues with Hibernate (in Grails) that are difficult or even impossible to resolve. And he is right.
I summarize my experience like this:
For simple things Hibernate is very good, but all the other mapping tools are good too. But as soon as you reach a certain level of complexity, you have to lick Hibernate’s ass: it is not serving you anymore, you are serving it.
- I don’t like Hibernate (and Grails), PART 1
- I don’t like Hibernate/Grails, part 2, repeatable finder problem: trust in nothing!
- I don’t like Grails/Hibernate part 3. DuplicateKeyException: Catch it if you can.
- I don’t like Grails/Hibernate, part 4. Hibernate proxy objects.
- I don’t like Hibernate/Grails part 5: auto-saving and auto-flushing
- I don’t like Hibernate/Grails part 6, how to save objects using refresh()
- I don’t like Hibernate/Grails part 7: working on more complex project
- I don’t like Hibernate/Grails, part 8, but some like Hibernate and Grails. Why?
- I don’t like Hibernate/Grails part 9: Testable code
- I don’t like Hibernate/Grails part 10: Repeatable finder, lessons learned
- I don’t like Hibernate/Grails part 11. Final thoughts.