I spend a lot of time fixing issues in my job. Software engineering is about creating solutions, yes, but it’s also about diagnosing and fixing problems. Some problems are run of the mill. You see the symptoms, you hear the complaints, you look at the context and it all points to one thing. A busted database, user error, cruddy printer heads.
Then there are those bugs that crop up and have no immediate solution. They are reported as ‘it sometimes crashes’ or ‘it intermittently goes slowly’. These are the ones to watch out for. It’s too easy to fob them off as ‘ah, user error’ or ‘an anomaly, nothing more’.
The report itself is full of assumptions, and those assumptions must be pinned down. What was the user doing at the time? When does it normally occur? Is it the same terminal each time? Is it happening at other sites? There are so many possibilities of what it ‘could’ be that it’s seemingly impossible to see what it really is. Worst of all is the assumption that the user has correctly isolated where it broke. More often, the user has a preconceived idea about the true nature of the problem.
Jumping in to solve a problem at this point is prone to error, and one can easily find oneself chasing ghosts about, grasping at elusive problems in all the wrong places.
And that’s where being methodical comes in. If there is one rule I’ve found when busting bugs, it’s to go back to the very, very start. Take all of the current context with a grain (or spoonful) of salt. Stop, relax, take a breath and look at it from afar. Is the machine turned on? Is the application running? What version is it on? Is there network access?
All of these things, and more, are easy to assume; without testing, none of them can be taken for granted. It usually only takes a second to verify the basics, and from there you can move on to the more complex issues. Funny thing: I would say that over half of the problems I face stem from something very simple, and half of the rest stem from something only marginally less simple.
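The basic checklist above can be sketched as a tiny triage helper. This is a minimal sketch under stated assumptions: the host, port, and version strings are hypothetical placeholders for whatever your own system reports, not anything from a real deployment.

```python
import socket

def basic_checks(host: str, port: int,
                 expected_version: str, reported_version: str,
                 timeout: float = 2.0) -> list[str]:
    """Run the obvious sanity checks first; return the names of any that fail."""
    failures = []
    # Is there network access? Try to open a plain TCP connection.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        failures.append("network")
    # Is the application on the version we think it is?
    if reported_version != expected_version:
        failures.append("version")
    return failures
```

Each check verifies one assumption you would otherwise take for granted; only once the list comes back empty is it worth climbing to the harder questions.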
And so it goes on, getting more and more complex until there’s nothing left. You find the code looks clean, the tests are working fine, and even when you artificially break the code to recreate the issue, it won’t fail.
And then it gets into hunch territory.
What’s the hunch? Your best and worst friend, the guy who pops up at the wrong time and gives you the right answer, but then talks utter crap for the rest of the week. The dude who hasn’t got the slightest clue why, but knows for sure that the problem is a threading issue introduced by a third-party integration. The hunch leads you up the garden path for a day, or gives you the answer in a sip of coffee.
The useful thing about hunches is that we can often test them: test the assumptions, test the outcome. Sure, a hunch can be wildly incorrect, and that’s where the mixing of the two mindsets comes into play: be methodical and rule out the obvious, then entertain your hunches by testing their claims and seeing whether they hold up.
It is the same with Cooper Alley Ghost. The protagonist has had a bellyful of rigorous scientific methodology, and has been trained to ignore his feelings, what the nagging, unreasonable back of his mind is telling him. Until now.
Milena shows us that there is more to this world than the explainable, that so much is going on around us for which we cannot account, that we cannot understand. We cannot put it all into a single sentence and explain it; we need our hunches, our guts, our feelings to guide us.
The Professor is not so blinded as to dismiss feelings out of personal conviction. Rather, we find it is incumbent upon him, as a member of the scientific community, to maintain his rigorous methodology or suffer ridicule among his peers.