## Machine Learning Cheat Sheet (for scikit-learn)

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

As you hopefully have heard, we at scikit-learn are doing a user survey (which is still open by the way).
One of the requests there was to provide some sort of flow chart on how to do machine learning.

As this is clearly impossible, I went to work straight away.

This is the result:

Needless to say, this sheet is completely authoritative.

Thanks to Rob Zinkov for pointing out an error in one yes/no decision.

More seriously: this is actually my workflow / train of thought whenever I try to solve a new problem. Basically, start simple first; if that doesn't work out, try something more complicated.
The chart above covers the intersection of the algorithms that are in scikit-learn and the ones I find most useful in practice.

The only caveat: I always start out by "just looking" at the data. To make any of the algorithms actually work, you need to do the right preprocessing of your data first – which is much more of an art than picking the right algorithm, imho.

Anyhow, enjoy 😉

Filed under Auto

## Worst. Bug. Ever.


Some bugs are the worst because they cost money. Some because they cost lives.

Others would cite bugs buried deep in a framework or hardware as “the worst”.

For me, the worst kind of bugs are those where the solution, in hindsight, seemed so obvious. You end up even more frustrated with the bug once you know the fix.

I encountered my worst bug during a summer internship after my sophomore year of school. I was helping a research team at Purdue write simulation tools for nanophotonics — I say this not to sound like I was some kind of genius, but to highlight that I was in over my head in a very unfamiliar domain.

A group of research scientists and grad students would work out the math needed to simulate the performance of different nano-scale lenses and I was responsible for wrapping the computations in a web interface and plotting the results.

The team had an existing set of MATLAB scripts that they used internally, but these scripts were hard to modify and distribute. But, on the bright side, I could hook into the MATLAB scripts and leverage their existing work.

When I finally got everything wired up and started comparing the results of a few test cases, they didn’t match. I did my best to debug the MATLAB script, but the math was outside of my comprehension (optics theorems, higher order integrals, and complex numbers). And when I ran the simulation with the same inputs in the stand-alone script, I would get the correct results. Hmm.

The web interface was built on a proprietary framework — it could leverage an entire grid computing cluster as the backend, but wasn’t exactly something that StackOverflow could help with.

After about a week of stepping through the code line by line (even verifying some of the calculations by hand), I finally isolated the section of code where the results diverged.

for i = 1:length(LensLayers)
    d(i) = compute_diffraction_at_wavelength(LensLayers(i), WAVELENGTH);
end


It seemed pretty innocuous; loop over an array, perform a calculation on each element, store the result in another array.

Do you see the bug?

Remember when I said there were some PhD-level computations being done? Most of them dealt with complex numbers, which are natively supported in MATLAB like so:

x = 2 + 3*i


Figure it out yet?

I was using i as my loop index, but as a side-effect the imaginary constant i in MATLAB was getting overwritten! So 2 + 3*i was evaluating to 5 for the first iteration, 8 for the second, etc. Sigh.

Changing the loop variable name immediately fixed the problem and the results became correct (an alternate solution is to use 3i instead of 3*i).
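The same failure mode exists in any language where a loop index can silently shadow a name the math depends on. Here's a hypothetical Python sketch of the analogous bug (the names are mine, purely for illustration):

```python
j = 1j                    # an "imaginary unit" name the formula relies on,
                          # playing the role of MATLAB's built-in i

for j in range(1, 4):     # the loop index silently overwrites it...
    pass                  # ...per-element work would go here

value = 2 + 3 * j         # expects (2+3j), but j is now just the last index, 3
print(value)              # 11, a plain integer instead of a complex number
```

In MATLAB the overwrite is even sneakier, because `i` is a built-in function rather than a variable, so shadowing it produces no error or warning of any kind.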

To this day, I’ve never run across another bug with such a frustratingly obvious solution.

It may have taken three weeks to solve, but at least I got a good “Worst. Bug. Ever.” story out of it.

Filed under Auto

## HackerRank Will Host Back To School Hackathon, Bringing College Students To Hot Startups


HackerRank has hosted college-focused hackathons before, but on February 2, it plans to connect some of the top coding talent in universities with some of the best-known companies in Silicon Valley.

Developed by the same company behind InterviewStreet, a site where companies find programmers by hosting “CodeSprints,” the HackerRank service launched last fall at the TechCrunch Disrupt conference. Co-founder Vivek Ravisankar said the goal is to create a community where hackers can complete programming challenges and see how they stack up against others. Unlike Coursera and Udacity, HackerRank is less focused on teaching you the basics of programming and more on letting coders practice their skills, he said.

For now, Ravisankar said that InterviewStreet is the company’s moneymaker, while at HackerRank he’s just trying to “build the user base and a very sticky platform.” Ultimately, he plans to make money by connecting programmers with companies they want to work for, but he said that will be a purely opt-in system.

As for the upcoming Back to School Challenge, Ravisankar said he has realized that college students, especially those who don’t go to a school in the San Francisco Bay Area, don’t really know much about Silicon Valley. The contest’s main prize is supposed to address that. The top 10 competitors will receive an all-expenses-paid trip to Silicon Valley, where HackerRank has organized tours at a number of companies, including Quora, Counsyl, PocketGems, OpenTable, RocketFuel, Weebly, Scribd, Pinterest, and Twitter. There are other prizes — the top prize includes \$2,000, a meeting with a partner at Y Combinator, and office hours with the HackerRank founders.

The contest will take place over 24 hours and consist of five challenges, with the first one focused on artificial intelligence. Ravisankar said he’ll be doing outreach at more than 30 schools, including Stanford, Berkeley, and Purdue, but any college student can participate — you just need to have a .edu email address.

Ravisankar said he’s hoping to host these types of Back to School challenges three or four times every year. You can read more and sign up here.

Filed under Auto

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

I'm a student, and as such, it should surprise no one to learn I spend a lot of my week in classrooms. I sit next to fellow students who give varying degrees of a damn about the class they're in. Most of them look fairly normal, they appear to care enough to try something new, and all of them sound pretty intelligent. You'll actually find, if you sit down and talk to anyone for a while, that whoever you talk to probably comes across as intelligent; but I digress.

The one thing most of these students have in common is their inability to put away their cellphones. These days, most students carry a smartphone, but the choice of device seems to matter little: the proclivity to whip out a phone mid-class to text friends or browse Facebook appears the same among smartphone and feature-phone users. This distracts me and anyone else not staring at a phone every waking moment. It disrespects the classroom and the idea of learning something when one believes a response to Eric's message, "hey hw u doin wanna hang out 2nite," takes precedence over a 50- to 90-minute class.

I study English, so many of my classes involve workshops – we focus on helping each other, fostering a small community in a classroom. To disrupt it with the constant vibration of a phone, and one's noticeable shuffle to grab the phone inside the backpack conveniently laid on the desk to hide it, shows a sad lack of care for that community. In writing courses, most of us hope to become better writers. We wouldn't take courses with such loose guidelines otherwise, though I grant some may take the workshop because they feel they can pass it easily. The same goes for other classes: the phone disturbs others, makes it difficult to focus on the task at hand, and makes the phone-obsessed difficult to work with.

I can't say whether the phone harms the students using it, nor whether their grades suffer. Phones help in class too, so one shouldn't ditch the little device. Folks can use them for plenty of good: looking up definitions, finding information the instructor or another student couldn't recall, and other little situations. Smartphones can make one more productive with their easy access to information. Facebook does not. Texting does not. These students should show up, be in the class mentally as well as physically, and respect the time others put into the class.

So if you find yourself ogling your phone in class, please stop yourself. Shut out your outside life and try to respect your classmates for the remainder of the class. We'd like to make it through without hoping a bus hits you on the way to class. One can't avoid emergencies, but Brad's wicked awesome keg stand can wait, much like his business degree.

Filed under Auto

## Fast Inverse Square Root


This post is about the magic constant 0x5f3759df and an extremely neat hack, fast inverse square root, which is where the constant comes from.

Meet the inverse square root hack:

float FastInvSqrt(float x) {
  float xhalf = 0.5f * x;
  int i = *(int*)&x;            // evil floating point bit level hacking
  i = 0x5f3759df - (i >> 1);    // what the fuck?
  x = *(float*)&i;
  x = x * (1.5f - (xhalf * x * x));
  return x;
}


What this code does is calculate, quickly, a good approximation for

$\frac{1}{\sqrt{x}}$

It's a fairly well-known function these days, and it first became widely known when the Quake III Arena source code was released in 2005. It was originally attributed to John Carmack but turned out to have a long history before Quake, going back through SGI and 3dfx to Ardent Computer in the mid-80s and the original author Greg Walsh. The concrete code above is an adapted version of the Quake code (that's where the comments are from).

This post has a bit of fun with this hack. It describes how it works, how to generalize it to any power between -1 and 1, and sheds some new light on the math involved.

(It does contain a fair bit of math. You can think of the equations as notes – you don’t have to read them to get the gist of the post but you should if you want the full story and/or verify for yourself that what I’m saying is correct).

## Why?

Why do you need to calculate the inverse of the square root – and need it so much that it’s worth implementing a crazy hack to make it fast? Because it’s part of a calculation you do all the time in 3D programming. In 3D graphics you use surface normals, 3-coordinate vectors of length 1, to express lighting and reflection. You use a lot of surface normals. And calculating them involves normalizing a lot of vectors. How do you normalize a vector? You find the length of the vector and then divide each of the coordinates with it. That is, you multiply each coordinate with

$\frac{1}{\sqrt{x^2+y^2+z^2}}$

Calculating $x^2+y^2+z^2$ is relatively cheap. Finding the square root and dividing by it is expensive. Enter FastInvSqrt.
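To make that concrete, here is a hedged Python sketch of the normalization path; `fast_inv_sqrt` is a direct port of the C routine above (the bit-cast is done with `struct`, and the function names are mine):

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float's bits as a 32-bit integer, apply the magic step,
    # reinterpret back, then do one Newton iteration -- same as the C version.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)

def normalize(x, y, z):
    s = fast_inv_sqrt(x * x + y * y + z * z)   # one call replaces sqrt + divide
    return (x * s, y * s, z * s)

nx, ny, nz = normalize(1.0, 2.0, 2.0)   # length is 3, so roughly (1/3, 2/3, 2/3)
```

The result is within a fraction of a percent of the exact unit vector, which is plenty for lighting calculations.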

## What does it do?

What does the function actually do to calculate its result? It has 4 main steps. First it reinterprets the bits of the floating-point input as an integer.

int i = *(int*)&x;         // evil floating point bit level hack


It takes the resulting value and does integer arithmetic on it which produces an approximation of the value we’re looking for:

i = 0x5f3759df - (i >> 1);  // what the fuck?


The result is not the approximation itself though, it is an integer which happens to be, if you reinterpret the bits as a floating point number, the approximation. So the code does the reverse of the conversion in step 1 to get back to floating point:

x = *(float*)&i;


And finally it runs a single iteration of Newton’s method to improve the approximation.

x = x*(1.5f-(xhalf*x*x));


This gives you, very quickly, an excellent approximation of the inverse square root of x. The last part, running Newton’s method, is relatively straightforward so I won’t spend more time on it. The key step is step 2: doing arithmetic on the raw floating-point number cast to an integer and getting a meaningful result back. That’s the part I’ll focus on.

## What the fuck?

This section explains the math behind step 2. (The first part of the derivation below, up to the point of calculating the value of the constant, appears to have first been found by McEniry).

Before we can get to the juicy part I’ll just quickly run over how standard floating-point numbers are encoded. I’ll just go through the parts I need, for the full background wikipedia is your friend. A floating-point number has three parts: the sign, the exponent, and the mantissa. Here’s the bits of a single-precision (32-bit) one:

s e e e e e e e e m m m m m m m m m m m m m m m m m m m m m m m


The sign is the top bit, the exponent is the next 8, and the mantissa the bottom 23. Since we're going to be calculating the square root, which is only defined for positive values, I'm going to assume the sign is 0 from now on.

When viewing a floating-point number as just a bunch of bits, the exponent and mantissa are just plain positive integers, nothing special about them. Let's call them E and M (since we'll be using them a lot). On the other hand, when we interpret the bits as a floating-point value we'll view the mantissa as a value between 0 and 1, so all 0s means 0 and all 1s is a value very close to but slightly less than 1. And rather than use the exponent as an 8-bit unsigned integer we'll subtract a bias, B, to make it a signed integer between -127 and 128. Let's call the floating-point interpretations of those values e and m. In general I'll use upper-case letters for values that relate to the integer view and lower-case for values that relate to the floating-point view.

Converting between the two views is straightforward:

$m = \frac{M}{L}$

$e = E - B$

For 32-bit floats L is $2^{23}$ and B is 127. Given the values of e and m you calculate the floating-point number's value like this:

$(1+m)2^e$

and the value of the corresponding integer interpretation of the number is

$M + LE$
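These two views are easy to check against each other; a small Python sketch (assuming a standard little-endian 32-bit float, with L and B as above):

```python
import struct

L, B = 2**23, 127

x = 6.5                                            # 6.5 = (1 + 0.625) * 2^2
I = struct.unpack('<I', struct.pack('<f', x))[0]   # the integer view of x
E = I >> 23                                        # exponent bits (sign is 0)
M = I & (L - 1)                                    # mantissa bits

print(E - B)           # e = 2
print(M / L)           # m = 0.625
print(M + L * E == I)  # the integer view really is M + L*E
```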

Now we have almost all the bits and pieces I need to explain the hack. The value we want to calculate, given some input x, is the inverse square root or

$y = \frac{1}{\sqrt{x}} = x^{-\frac 12}$

For reasons that will soon become clear we’ll start off by taking the base 2 logarithm on both sides:

$\log_2 y = {-\frac 12}\log_2 x$

Since the values we’re working with are actually floating-point we can replace x and y with their floating-point components:

$\log_2 (1+m_y) + e_y = {-\frac 12}(\log_2 (1+m_x) + e_x)$

Ugh, logarithms. They’re such a hassle. Luckily we can get rid of them quite easily but first we’ll have to take a short break and talk about how they work.

On both sides of this equation we have a term that looks like this,

$\log_2(1 + v)$

where v is between 0 and 1. It just so happens that for v between 0 and 1, this function is pretty close to a straight line:

Or, in equation form:

$\log_2(1 + v) \approx v + \sigma$

Where σ is a constant we choose. It’s not a perfect match but we can adjust σ to make it pretty close. Using this we can turn the exact equation above that involved logarithms into an approximate one that is linear, which is much easier to work with:

$m_y + \sigma + e_y \approx {-\frac 12}(m_x + \sigma + e_x)$
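Before moving on, it's easy to check numerically how good this linear fit is. A quick Python sketch, using the σ the post settles on later (0.0450465):

```python
import math

sigma = 0.0450465   # the value chosen later in the derivation

# largest deviation between log2(1+v) and the line v + sigma on [0, 1]
worst = max(abs(math.log2(1 + v) - (v + sigma))
            for v in [k / 1000 for k in range(1001)])
print(worst < 0.05)   # True: the fit is never off by more than about 0.045
```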

Now we’re getting somewhere! At this point it’s convenient to stop working with the floating-point representation and use the definitions above to substitute the integer view of the exponent and mantissa:

$\frac{M_y}{L} + \sigma + E_y - B \approx {-\frac 12}(\frac{M_x}{L} + \sigma + E_x - B)$

If we shuffle these terms around a few steps we’ll get something that looks very familiar (the details are tedious, feel free to skip):

$\frac{M_y}{L} + E_y \approx {-\frac 12}(\frac{M_x}{L} + \sigma + E_x - B) - \sigma + B$

$\frac{M_y}{L} + E_y \approx {-\frac 12}(\frac{M_x}{L} + E_x) + {\frac 32}(B - \sigma)$

$M_y + LE_y \approx {\frac 32}L(B - \sigma) - {\frac 12}(M_x + LE_x)$

After this last step something interesting has happened: among the clutter we now have the value of the integer representations on either side of the equation:

$\mathbf{I_y} \approx {\frac 32}L(B - \sigma) - {\frac 12}\mathbf{I_x}$

In other words the integer representation of y is some constant minus half the integer representation of x. Or, in C code:

i = K - (i >> 1);


for some K. Looks very familiar right?

Now what remains is to find the constant. We already know what B and L are but we don’t have σ yet. Remember, σ is the adjustment we used to get the best approximation of the logarithm, so we have some freedom in picking it. I’ll pick the one that was used to produce the original implementation, 0.0450465. Using this value you get:

${\frac 32}L(B - \sigma) = {\frac 32}2^{23}(127 - 0.0450465) = 1597463007$

Want to guess what the hex representation of that value is? 0x5f3759df. (As it should be of course, since I picked σ to get that value.) So the constant is not a bit pattern as you might think from the fact that it’s written in hex, it’s the result of a normal calculation rounded to an integer.
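The claim is a one-liner to verify; a Python sketch of the same arithmetic:

```python
L, B, sigma = 2**23, 127, 0.0450465

K = int(1.5 * L * (B - sigma))   # (3/2) * L * (B - sigma), truncated to an int
print(hex(K))                    # 0x5f3759df
```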

But as Knuth would say: so far we’ve only proven that this should work, we haven’t tested it. To give a sense for how accurate the approximation is here is a plot of it along with the accurate inverse square root:

This is for values between 1 and 100. It’s pretty spot on right? And it should be – it’s not just magic, as the derivation above shows, it’s a computation that just happens to use the somewhat exotic but completely well-defined and meaningful operation of bit-casting between float and integer.

## But wait there’s more!

Looking at the derivation of this operation tells you something more than just the value of the constant though. You will notice that the derivation hardly depends on the concrete value of any of the terms – they’re just constants that get shuffled around. This means that if we change them the derivation still holds.

First off, the calculation doesn't care what L and B are. They're given by the floating-point representation. This means that we can do the same trick for 64- and 128-bit floating-point numbers if we want; all we have to do is recalculate the constant, which is the only part that depends on them.

Secondly, it doesn't care which value we pick for σ. The σ that minimizes the difference between the logarithm and v + σ may not, and indeed does not, give the most accurate approximation. That's due to a combination of floating-point rounding and the Newton step. Picking σ is an interesting subject in itself and is covered by McEniry and Lomont.

Finally, it doesn't depend on -1/2. That is, the exponent here happens to be -1/2 but the derivation works just as well for any other exponent between -1 and 1. If we call the exponent p (because e is taken) and do the same derivation with p instead of -1/2 we get:

$\mathbf{I_y} \approx (1 - p)L(B - \sigma) + p\mathbf{I_x}$

Let’s try a few exponents. First off p=0.5, the normal non-inverse square root:

$\mathbf{I_y} \approx K_{\frac 12} + {\frac 12}\mathbf{I_x}$

$K_{\frac 12} = {\frac 12}L(B - \sigma) = {\frac 12}2^{23}(127 - 0.0450465) = \mathtt{0x1fbd1df5}$

or in code form,

i = 0x1fbd1df5 + (i >> 1);


Does this work too? Sure does:

This may be a well-known method to approximate the square root but a cursory google and wikipedia search didn’t suggest that it was.
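Lacking the plot, here is a quick numeric check (a Python sketch; the helpers just re-create the float/int bit-cast):

```python
import math
import struct

def bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

def flt(i):
    return struct.unpack('<f', struct.pack('<I', i))[0]

def approx_sqrt(x):
    return flt(0x1fbd1df5 + (bits(x) >> 1))

# worst relative error over 1..100, with no Newton polishing step at all
worst = max(abs(approx_sqrt(x) - math.sqrt(x)) / math.sqrt(x)
            for x in [v / 10 for v in range(10, 1001)])
print(worst < 0.05)   # True: a few percent at most
```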

It even works with "odd" powers, like the cube root

$\mathbf{I_y} \approx K_{\frac 13} + {\frac 13}\mathbf{I_x}$

$K_{\frac 13} = {\frac 23}L(B - \sigma) = {\frac 23}2^{23}(127 - 0.0450465) = \mathtt{0x2a517d3c}$

which corresponds to:

i = (int) (0x2a517d3c + (0.333f * i));


Since this is an odd factor we can’t use shift instead of multiplication. Again the approximation is very close:

At this point you may have noticed that when changing the exponent we're actually doing something pretty simple: adjusting the constant by a linear factor and changing the factor that is multiplied onto the integer representation of the input. These are not expensive operations, so it's feasible to do them at runtime rather than pre-compute them. If we pre-compute just the p-independent factor:

$L(B - \sigma) = 2^{23}(127 - 0.0450465) = \mathtt{0x3f7a3bea}$

we can calculate the value without knowing the exponent in advance:

i = (1 - p) * 0x3f7a3bea + (p * i);


If you shuffle the terms around a bit you can even save one of the multiplications:

i = 0x3f7a3bea + p * (i - 0x3f7a3bea);


This gives you the “magic” part of fast exponentiation for any exponent between -1 and 1; the one piece we now need to get a fast exponentiation function that works for all exponents and is as accurate as the original inverse square root function is to generalize the Newton approximation step. I haven’t looked into that so that’s for another blog post (most likely for someone other than me).
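Before digging into that constant, here is a hedged Python sketch of the runtime-exponent trick in action (the bit-cast is re-created with `struct`; valid for exponents between -1 and 1, and with no Newton step the results are only rough):

```python
import struct

C = 0x3f7a3bea   # L * (B - sigma), the p-independent constant

def bits(x):
    return struct.unpack('<I', struct.pack('<f', x))[0]

def flt(i):
    return struct.unpack('<f', struct.pack('<I', i))[0]

def approx_pow(x, p):
    # (1 - p)*C + p*i, written with a single multiplication
    return flt(int(C + p * (bits(x) - C)))

print(round(approx_pow(9.0, 0.5), 2))    # roughly 3
print(round(approx_pow(8.0, 1/3), 2))    # roughly 2
print(round(approx_pow(4.0, -0.5), 2))   # roughly 0.5
```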

The expression above contains a new “magical” constant,  0x3f7a3bea. But even if it’s in some sense “more magical” than the original constant it depends on an arbitrary choice of σ so it’s not universal in any way. I’ll call it Cσ and we’ll take a closer look at it in a second.

But first, one sanity check we can try with this formula is when p=0. For a p of zero the result should always be 1, since $x^0 = 1$ independent of x. And indeed the term involving the input falls away because it is multiplied by 0, and we are left with simply:

i = 0x3f7a3bea;


Which is indeed constant – and interpreted as a floating-point value it's 0.977477, also known as "almost 1", so the sanity check checks out. That tells us something else too: Cσ actually has a meaningful value when cast to a float. It's 1, or very close to it.

That’s interesting. Let’s take a closer look. The integer representation of Cσ is

$C_\sigma = L(B - \sigma) = LB - L\sigma$

This is almost but not quite the shape of a floating-point number, the only problem is that we’re subtracting rather than adding the second term. That’s easy to fix though:

$LB - L\sigma = LB - L + L - L\sigma = L(B - 1) + L(1 - \sigma)$

Now it looks exactly like the integer representation of a floating-point number. To see which we’ll first determine the exponent and mantissa and then calculate the value, cσ. This is the exponent:

$e_{c_\sigma} = (E_{C_\sigma} - B) = (B - 1 - B) = -1$

and this is the mantissa:

$m_{c_\sigma} = \frac{M_{C_\sigma}}{L} = \frac{L(1 - \sigma)}{L} = 1 - \sigma$

So the floating-point value of the constant is (drumroll):

$c_\sigma = (1 + m_{c_\sigma})2^{e_{c_\sigma}} = \frac{1 + 1 - \sigma}2 = 1 - \frac{\sigma}2$

And indeed if you divide our original σ from earlier, 0.0450465, by 2 you get 0.02252325; subtract that from 1 and you get 0.97747675, our friend "almost 1" from a moment ago. That gives us a second way to view Cσ, as the integer representation of a floating-point number, and to calculate it in code:

float sigma = 0.0450465f;
float c_sigma = 1.0f - (0.5f * sigma);
int C_sigma = *(int*)&c_sigma;


Note that for a fixed σ these are all constants and the compiler should be able to optimize this whole computation away. The result is 0x3f7a3beb – not exactly 0x3f7a3bea from before but just one bit away (the least significant one) which is to be expected for computations that involve floating-point numbers. Getting to the original constant, the title of this post, is a matter of multiplying the result by 1.5.
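The same computation in Python (`struct` standing in for the C bit-cast), including the final step back to the title constant:

```python
import struct

sigma = 0.0450465
c_sigma = 1 - 0.5 * sigma                                    # 0.97747675, "almost 1"
C_sigma = struct.unpack('<I', struct.pack('<f', c_sigma))[0]

print(hex(C_sigma))                 # 0x3f7a3beb, one bit off the exact constant
print(hex(int(1.5 * 0x3f7a3bea)))   # 0x5f3759df, the original magic number
```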

With that we’ve gotten close enough to the bottom to satisfy at least me that there is nothing magical going on here. For me the main lesson from this exercise is that bit-casting between integers and floats is not just a meaningless operation, it’s an exotic but very cheap numeric operation that can be useful in computations. And I expect there’s more uses of it out there waiting to be discovered.

via Hacker News http://blog.quenta.org/2012/09/0x5f3759df.html

Filed under Auto

## Rewind: How it all started for Del Bosque


September 14th, 2012

Today, Vicente Del Bosque González is the man; the epitome of success in football coaching. The legend. The man whose CV is coveted by all other managers, with two UEFA Champions League titles, a World Cup, and a European Championship. He has them all, all three of the most prestigious competitions in football, an unprecedented achievement.

It has not always been this rosy. Throughout his career, he has been doubted, ridiculed, vilified and undermined, often based more on his personality than on his concrete achievements. At Real Madrid, for instance, he was accused of being inept and of having the galácticos do his work for him. He was also accused of being too soft-spoken, 'safe' and diplomatic, always shying away from confrontations with his charges as well as from media polemics. With Spain, people have suggested that he inherited Luis Aragonés's diminutive tiki-taka wizards (as well as enjoying a beneficial continuity of Barça's philosophy at the national level), and therefore had very little to do.

There has always been an aura of pessimism around him wherever he's been, despite his always delivering. Maybe it is because he does not fit the profile of 'the media's favourite' – because he never attracts controversy, nor looks like the monolithic figure that the high-profile positions he has occupied are used to. Maybe his efforts – like keeping a winning team in winning mode, or achieving with a star-studded side – have not been the sort that sit on the surface, easily seen and praised. Maybe his hard work has always been eclipsed by circumstance, through no fault of his own.

Due to all this, perhaps, despite his stunning achievements, the calm, unassuming Salamanca-born manager of the Spanish national team – a team already heralded as the greatest ever – hardly ever receives the kind of media spotlight that, say, Guardiola or Mourinho receive today.

But that is not, and has never been, a problem for the famously moustachioed 61-year-old. In fact, he prefers it that way; he loves the quiet away from the media lens. And he could not care less about being criminally downplayed and underrated. His immense success speaks for itself.

But how did it all begin for him? Well, his journey towards the pinnacle of success began in 1999, with an unusual first season. A first season that captured his familiarity with the concept of the underdog, and of achieving against the odds. A first season, I'm sure, he'll always look back on with nostalgia.

Humble beginnings

During his playing days, he was a midfielder. His most notable period was with the club dear to his heart – Real Madrid. He played in Madrid for 14 years, between 1970 and 1984, winning 5 La Ligas and 4 Copa del Reys. After that spell he worked diligently behind the scenes for almost 16 years, during which he coached the Real Madrid B side and at times handled the first team on an interim basis when there was no permanent manager (11 matches in 1994 and 1 match in 1996).

The man, once described in a 2003 BBC article as being "as cool as a cryogenically frozen cucumber", never rushed. He was patient, working hard and taking his chances as and when they came. He knew he would one day end up in the manager's seat at the Bernabeu on a full-time basis. Managers came and went, and humble Del Bosque remained behind the scenes, learning, waiting.

Breakthrough

And then it came. His time. His opportunity. On the 17th of November 1999, the board at Real, led by Lorenzo Sanz – after having problems with manager John Toshack and his underperformance – felt it was time to shake things up on the technical bench, and finally time to give Del Bosque his chance. Real Madrid had been managed by a staggering 7 managers in three years. The club sought some sort of stability. There was a need to secure the services of an astute trainer for the long term. Debts were also piling up. There was the need for success. The board turned to modest Del Bosque, and he did not turn them down. He officially assumed the most famous hot seat in football on the 18th of November, 1999.

It wasn't exactly a high-profile appointment. He wasn't the most popular of candidates. But the board felt they had to try something new – just as Barcelona did when they recruited Guardiola, or Inter with Stramaccioni. He had not managed at the top level for a full season before. Experience did not favour him. It was basically a gamble. But Del Bosque had been working at the club for almost all of his life. He knew the club well; he loved it. Above all, he was hardworking.

He had a tough job to do. John Toshack had drawn or lost most of the league games up to that point, and the team was sitting 8th in the table. There was also the Champions League, and qualification to the next round from the second group stage (Toshack had already taken the team through the first group stage). And there was the Copa Del Rey too. The task was ginormous, and the then 48-year-old Del Bosque had been thrown in at the deep end. Even though he was a faithful Madridista through and through, there was no way he would evade the sack if he messed up. Politics at Real meant Lorenzo Sanz was virtually betting his presidential future on Del Bosque. It was more or less make or break.

He got to work in earnest, trying to juggle the demands of all three competitions and their accompanying expectations. But he held his own, remained focused, and sought to deliver.

The rookie’s success

Del Bosque finished the 1999/00 La Liga season in fifth place – a position which would normally have been disastrous for a club like Real Madrid – but it was not.

Why? They achieved a points tally of 62, only 7 points behind champions Deportivo La Coruna – impressive, considering how badly they started the season. Also, fifth place then meant Champions League qualification – which, in fact, they found out they wouldn't need, because…

…they went on to win the Champions League itself, beating fellow Spanish club Valencia convincingly in the final, 3-0. This was after qualifying narrowly from the second group phase (above third-placed Dynamo Kyiv on head-to-head), and subsequently flooring their quarter-final and semi-final opponents.

It became their second triumph in four seasons. Interestingly, Del Bosque also reached the semi-final of the Copa Del Rey, only losing to eventual winners Espanyol. The man who took over in medias res, amidst poor performances and instability, united the club, raised their game, and went on to secure the biggest trophy in club football. And this was all done in his first full season of top-level management. It was also done at the biggest, most successful club in the history of football, where the pressure is unimaginable.

The first chapter of a remarkable success story had been written.

Don Vicente went on to win 6 more trophies in his next three seasons at the helm, including another European Cup in 2002 as well as two La Liga titles, in what became the club’s second most successful era.

### Author Info

#### Fiifi Anaman

Fiifi Anaman is a young freelance football writer from Ghana. He writes for Goal.com Ghana and Full-TimeWhistle.com, amongst other outlets, and occasionally talks about football on radio.



## Self-Taught Developers: Are You Missing Your Foundation?


About a month ago, I wrote about software development being an art, which got me thinking about the importance of practicing alongside experienced programmers as part of the education process. A Computer Science curriculum provides important background science and strips away the layers of perceived computer “magic,” while an apprenticeship hones the practice and application of the science.

But, what if you’re missing the first part of the equation? Biologists, physicists, small business owners, and people with interesting ideas everywhere are seeing ways that software could help them and are acting to build it themselves — and that’s great! I think we’ll eventually see software development taught in many non-technical programs, just as we do with reading and writing English.

In the meantime, if you find yourself in that boat, here are a few basic things every developer should know about the science of computers and software.

1. Data Structures

Using the right data structure for the job will save you a lot of headaches. Not only is an associative array clearer in intent than two arrays with related values, you'll also write less code and (chances are) it will perform better because you're using the data structure for its intended purpose.
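To illustrate, here's a minimal Python sketch (the names and scores are made up) contrasting two parallel arrays with an associative array:

```python
# Two parallel lists: the name/score relationship is implicit,
# and the lists can silently drift out of sync.
names = ["alice", "bob", "carol"]
scores = [91, 78, 85]

def score_for(name):
    return scores[names.index(name)]  # O(n) linear scan each lookup

# An associative array (a dict in Python) makes the intent
# explicit and gives average O(1) lookup.
scores_by_name = {"alice": 91, "bob": 78, "carol": 85}

print(score_for("bob"))        # 78
print(scores_by_name["bob"])   # 78
```

Both lookups return the same value, but the dict says what it means and can't fall out of alignment.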

Work to gain an understanding of the strengths and weaknesses of even a few relatively basic data structures and you’ll be much more prepared to deal with the varied challenges that come your way. I recommend at least the following:

There’s an incredible depth to this topic, but that should be a great start for many practical programming needs.

2. Boolean Logic

Boolean logic is an essential topic, and one that’s easy to dismiss because it seems relatively simple. But its apparent simplicity can also be a stumbling block when you, inevitably, make a mistake. Who hasn’t messed up the negation of (a && b) at least once?

Having a solid grasp of at least basic boolean logic makes it easier to spot those embarrassing mistakes and confidently flip statements around so they're easy to comprehend.

3. Algorithms

Don’t let the word algorithm put you off — it just means a series of steps you follow to accomplish a goal. By that definition, you’re creating an algorithm every time you write software, which is why this topic is so important. Learning how we’ve solved well-understood problems in software along with their respective performance characteristics will help you better understand the performance and complexity of your own software.

These are a few areas in the vast realm of algorithms I recommend starting with:

• Recursive vs. Iterative Algorithms
• Big-O notation for classifying performance
• Basic Sort & Search Algorithms
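As a taste of the last two bullets, here's a hedged Python sketch (the data is arbitrary) comparing an O(n) linear search with an O(log n) binary search:

```python
def linear_search(items, target):
    """O(n): check every element in turn."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(items, target):
    """O(log n): requires sorted input; halve the search range each step."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(0, 100, 2))       # sorted even numbers 0..98
print(linear_search(data, 42))      # 21
print(binary_search(data, 42))      # 21
```

Both find the same index, but on a million sorted elements the binary search needs about 20 comparisons where the linear scan may need a million — that difference is what Big-O notation captures.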

4. At Least a Little Set Theory

Relational databases are extremely prevalent — you're probably working with one now or will be someday soon. Do you know what a Cartesian product is? Or that the results of that SELECT statement you're writing are a set projection? You should.

Projection, union, intersection, complement, and Cartesian product are all examples of set theory operations you'll encounter writing SQL statements against a relational database. Sean Mehan has written a nice overview of Set Theory and SQL Concepts that would be a good place to start.
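Python's built-in sets make these operations concrete. A small sketch with made-up "table" contents, noting the rough SQL analogue of each operation:

```python
from itertools import product

staff = {"alice", "bob", "carol"}
admins = {"carol", "dave"}

print(staff | admins)   # union: like SQL UNION
print(staff & admins)   # intersection: like SQL INTERSECT -> {'carol'}
print(staff - admins)   # difference (relative complement): like SQL EXCEPT

# Cartesian product: every pairing, which is roughly what a JOIN
# with no join condition (CROSS JOIN) produces.
pairs = set(product(staff, admins))
print(len(pairs))       # 3 * 2 = 6 rows
```

Recognizing that an accidental 6-row result came from a missing join condition is exactly the kind of insight a little set theory buys you.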

What’s Missing?

What else should be considered part of the bare necessities of a working knowledge of computer science for the purposes of software development?


Reference: Self-Taught Developers: Are You Missing Your Foundation? from our JCG partner Lisa Tjapkes at the Atomic Spin blog.