Write highly decoupled Webapps using Twitter Flight

Twitter recently open-sourced their webapp framework, Flight. I was starting work on a webapp and thought I would give it a go. I have only worked with it for the last two or three weeks, but it has already left a lasting impression on me.

A different take from conventional MV* frameworks

The primary idea behind Flight is to divide the app into a number of “components” that interact with each other through events and, ideally, can function independently. Flight also offers “mixins”, which extend a component by adding more properties to it. The number of MVC/MV* frameworks in JS has grown exponentially of late; Flight offers a genuinely fresh way of looking at modern webapps. Steven Sanderson offers a very nice perspective on the various MV* frameworks around in his blog. Despite their differences, the one thing common to all of them is that they impose their own structure on your code; some do this more than others. Flight is different in that, beyond the component definition, it leaves you free to plug in whatever you wish without much pain. Flight is also inherently modular, which leads to very organized code with no effort at all. And it sits well with the DRY philosophy: Flight components can be attached to multiple DOM elements, Flight mixins can be added to multiple components, and a single Flight component can have multiple mixins added to it.
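To make that concrete, here is a minimal sketch of a component with a mixin, written as an AMD module since that is how Flight is usually loaded. The component, mixin, selector and event names are all hypothetical; the calls used (defineComponent, defaultAttrs, after('initialize'), on, trigger, select, attachTo) are roughly the Flight 1.x API, so check the docs for the version you are using.

define(['flight/lib/component'], function (defineComponent) {

  // A mixin is just a function that adds properties to the component.
  function withHighlighting() {
    this.highlight = function () {
      this.$node.addClass('highlighted');
    };
  }

  // The component itself: it reacts to DOM and custom events and triggers its own.
  function todoItem() {
    this.defaultAttrs({ titleSelector: '.title' });

    this.after('initialize', function () {
      this.on('click', this.handleClick);
      this.on(document, 'dataTodoUpdated', this.render);
    });

    this.handleClick = function () {
      this.highlight();                                             // provided by the mixin
      this.trigger('uiTodoSelected', { id: this.$node.data('id') });
    };

    this.render = function (ev, data) {
      this.select('titleSelector').text(data.title);
    };
  }

  return defineComponent(todoItem, withHighlighting);
});

After requiring this module as, say, TodoItem, the same definition can be attached to any number of DOM elements with TodoItem.attachTo('.todo-item'); nothing else on the page needs to know the component exists, because other components only ever see its events.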

How I used Flight

A little background about my app stack: it is a single-page app. The various libraries I have used are:

  1. RequireJS (AMD) for loading modules
  2. Flatiron Director for routing (I wrote a hack over the original code to make it AMD-compatible)
  3. Flight
  4. Bootstrap and a few of its plugins
  5. jQuery
  6. Lo-Dash (the functionality of Underscore, but faster)
  7. A bunch of jQuery plugins here and there
  8. CoffeeScript and Sass

I call each single-page state a “module”. Each module uses whatever components it needs. Over the last two or three weeks I have developed a solid structure for using my components, and I have been able to roll out code quicker than I ever have before. The file structure for each component looks like this:

|– model.js
|– operations.js
|– view.js
|– templates
   |– template1.txt
   |– template2.txt

Here, model.js is the primary component file; operations.js and view.js are pulled into it as mixins. The idea is to keep model.js clean: all my dirty work happens in operations.js (the various operations) and view.js (all the DOM manipulation; templates are loaded here too). By declaring further dependencies in operations.js and view.js, I can use any third-party library I wish, and it integrates seamlessly with the component. For example, I have not explicitly committed to a templating engine; I could drop in whichever one I am comfortable with without compromising the overall structure.
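Roughly, the wiring looks like this. The file names follow the structure above; the mixin body, the text! template loader and the event names are hypothetical stand-ins rather than a prescription.

// view.js – a mixin: all DOM manipulation and template loading lives here
define(['text!./templates/template1.txt'], function (template1) {
  return function withView() {
    this.render = function (data) {
      // plug in whichever templating engine you prefer; a naive replace is shown
      this.$node.html(template1.replace('{{title}}', data.title));
    };
  };
});

// model.js – the primary component file, composing the mixins
define(['flight/lib/component', './operations', './view'],
  function (defineComponent, withOperations, withView) {

    function widget() {
      this.after('initialize', function () {
        this.on(document, 'dataWidgetLoaded', this.update);
      });

      this.update = function (ev, data) {
        this.render(this.process(data));   // process() comes from operations.js,
      };                                   // render() from view.js
    }

    return defineComponent(widget, withOperations, withView);
  });

Swapping the templating engine, or changing how operations.js does its work, never touches model.js.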

Some Talking Points:

  • Do have a look at this account by Tom Hamshere on using Flight to refactor TweetDeck; the post has a lot of insights. I had a few nice discussions with Tom about Flight, and they have only strengthened my liking for the framework. 
  • Ean Schuessler talks about the “models” in Flight not being powerful enough. I gave this a long, hard thought. However, as Flight relies on events more than on model data as such, data binding is handled completely differently. It is more of an event/DOM-first approach, similar to how models in Angular differ from those in Backbone: in Backbone, the views are derived from the models; here, it is the other way round.
  • Martin Gontovnikas also talks about the Model-View part, as in the point above, and mentions the lack of a router. This, I feel, is really the essence of Flight: it is not a full framework that imposes itself on you. It lets you use whatever pieces you want for the app; I, for instance, have used Flatiron Director, but I could have used Crossroads or even the Backbone router; that remains my call.
  • Dividing the app into a number of components makes it easier to test each component separately. Be careful about selecting your test runner, though; it should be able to support DOM events.
  • Flight is a relatively new framework, and there are probably better ways to use it. If you have used Flight and have some nice suggestions, do let me know. I could also share some of my code structure, if you would like me to. Please feel free to give feedback.


Filed under Tech

Breaking down Amazon’s mega dropdown

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

The hover effects on Amazon’s big ‘ole “Shop by Department” mega dropdown are super fast. Look’it how quick each submenu fills in as your mouse moves down the list:

[animation: Amazon’s “Shop by Department” dropdown, with submenus filling in instantly]

It’s instant. I got nerd sniped by this. Most dropdown menus have to include a bit of a delay when activating submenus. Here’s an old Khan Academy dropdown as an example:

[animation: an old Khan Academy dropdown, showing the activation delay]

See the delay? You need that, because otherwise when you try to move your mouse from the main menu to the submenu, the submenu will disappear out from under you like some sort of sick, unwinnable game of whack-a-mole. Enjoy this example from bootstrap’s dropdown menus:

[animation: a Bootstrap dropdown whose submenu disappears as the cursor moves toward it]
I love bootstrap, don’t get it twisted. Just a good example of submenu frustration.

It’s easy to move the cursor from Amazon’s main dropdown to its submenus. You won’t run into the bootstrap bug. They get away with this by detecting the direction of the cursor’s path.

[diagram: the blue “aim” triangle between the cursor and the submenu’s right corners]
If the cursor moves into the blue triangle the currently displayed submenu will stay open for just a bit longer.

At every position of the cursor you can picture a triangle between the current mouse position and the upper and lower right corners of the dropdown menu. If the next mouse position is within that triangle, the user is probably moving their cursor into the currently displayed submenu. Amazon uses this for a nice effect. As long as the cursor stays within that blue triangle the current submenu will stay open. It doesn’t matter if the cursor hovers over “Appstore for Android” momentarily — the user is probably heading toward “Learn more about Cloud Drive.”

And if the cursor goes outside of the blue triangle, they instantly switch the submenu, giving it a really responsive feel.
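The check itself is only a few lines of geometry. Here is a rough sketch of the slope-based variant mentioned at the end of this post; the function and variable names are mine, not jQuery-menu-aim’s, and it assumes screen coordinates (y grows downward) with the cursor to the left of the submenu’s edge.

// True if the latest mouse movement still points into the "aim triangle"
// formed by the previous position and the submenu's upper/lower right corners.
function movingTowardSubmenu(prev, curr, upperRight, lowerRight) {
  function slope(a, b) { return (b.y - a.y) / (b.x - a.x); }
  var upperSlope = slope(prev, upperRight);   // negative: the corner is above the cursor
  var lowerSlope = slope(prev, lowerRight);   // positive: the corner is below the cursor
  var currSlope = slope(prev, curr);
  return currSlope > upperSlope && currSlope < lowerSlope;
}

On each mousemove you would delay switching the active submenu while this returns true, and switch immediately once it returns false.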

So if you’re as geeky as me and think something this trivial is cool, I made a jQuery plugin that fires events when detecting this sort of directional menu aiming: jQuery-menu-aim. We’re using it in the new Khan Academy “Learn” menu:

[image: the new Khan Academy “Learn” menu]

I think it feels snappy. I’m not ashamed to copy Amazon. I’m sure this problem was solved years and years ago, forgotten, rediscovered, solved again, forgotten, rediscovered, solved again.

If anyone else on the planet ends up finding a use for jQuery-menu-aim, I’d be grateful to know what you think.


Thanks go to Ben Alpert for helping me understand the linear algebra / cross-product magic Amazon uses to detect movement inside the “blue triangle.” I ended up going w/ a cruder slope-based approach, mostly b/c I’ve lost all intuitive understanding of linear algebra. Sad. Need to watch more KA videos.

via Hacker News http://bjk5.com/post/44698559168/breaking-down-amazons-mega-dropdown


Filed under Auto

“I Want Hue” – Colors for Data Scientists

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

See also our other tools at Médialab Tools!

And a huge thanks to these inspiring works:

Chroma.js

I use this excellent JS library heavily to convert colors. If you have not done it yet, look at this post. You’ll understand many useful things about color in dataviz.

ColorBrewer

A very famous tool that showed the way a few years ago. If you do not know it, you must take a look.

via Hacker News http://tools.medialab.sciences-po.fr/iwanthue/


Filed under Auto

Turn your browser into a notepad with one line

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

Sometimes I just need to type garbage. Just to clear out my mind. Using editors to type such gibberish annoys me because it clutters my project workspace (I’m picky, I know).

So I do this. Since I live in the browser, I just open a new tab and type this in the URL bar.

data:text/html, <html contenteditable>

Voila, browser notepad.

You don’t need to remember it. It’s not rocket science. We are using the data URI format and telling the browser to render HTML (try “javascript:alert('Bazinga');” too). The content of said HTML is a single line of markup with the HTML5 attribute contenteditable. This works only on modern browsers that understand the attribute. Click and type!

via Hacker News https://coderwall.com/p/lhsrcq


Filed under Auto

Machine Learning Cheat Sheet (for scikit-learn)

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

As you hopefully have heard, we at scikit-learn are doing a user survey (which is still open by the way).
One of the requests there was to provide some sort of flow chart on how to do machine learning.

As this is clearly impossible, I went to work straight away.

This is the result:

Needless to say, this sheet is completely authoritative.

Thanks to Rob Zinkov for pointing out an error in one yes/no decision.

More seriously: this is actually my work flow / train of thoughts whenever I try to solve a new problem. Basically, start simple first. If this doesn’t work out, try something more complicated.
The chart above includes the intersection of all algorithms that are in scikit-learn and the ones that I find most useful in practice.

Only that I always start out with “just looking”. To make any of the algorithms actually work, you need to do the right preprocessing of your data – which is much more of an art than picking the right algorithm imho.

Anyhow, enjoy 😉

via Hacker News http://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html


Filed under Auto

Worst. Bug. Ever.

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

Some bugs are the worst because they cost money. Some because they cost lives.

Others would cite bugs buried deep in a framework or hardware as “the worst”.

For me, the worst kind of bugs are those where the solution, in hindsight, seemed so obvious. You end up more frustrated with the bug after knowing the fix.


I encountered my worst bug during a summer internship after my sophomore year of school. I was helping a research team at Purdue write simulation tools for nanophotonics — I say this not to sound like I was some kind of genius, but to highlight that I was in over my head in a very unfamiliar domain.

A group of research scientists and grad students would work out the math needed to simulate the performance of different nano-scale lenses and I was responsible for wrapping the computations in a web interface and plotting the results.

The team had an existing set of MATLAB scripts that they used internally, but these scripts were hard to modify and distribute. But, on the bright side, I could hook into the MATLAB scripts and leverage their existing work.

When I finally got everything wired up and started comparing the results of a few test cases, they didn’t match. I did my best to debug the MATLAB script, but the math was outside of my comprehension (optics theorems, higher order integrals, and complex numbers). And when I ran the simulation with the same inputs in the stand-alone script, I would get the correct results. Hmm.

The web interface was built on a proprietary framework — it could leverage an entire grid computing cluster as the backend, but wasn’t exactly something that StackOverflow could help with.

After about a week of stepping through the code line by line (even verifying some of the calculations by hand), I finally isolated the section of code where the results diverged.

for i=1:length(LensLayers)
  d(i) = compute_diffraction_at_wavelength(LensLayers(i), WAVELENGTH);
end

It seemed pretty innocuous; loop over an array, perform a calculation on each element, store the result in another array.

Do you see the bug?

Remember when I said there were some PhD-level computations being done? Most of them dealt with complex numbers, which are natively supported in MATLAB like so:

x = 2 + 3*i

Figure it out yet?

I was using i as my loop index, but as a side-effect the imaginary constant i in MATLAB was getting overwritten! So 2 + 3*i was evaluating to 5 for the first iteration, 8 for the second, etc. Sigh.

Changing the loop variable name immediately fixed the problem and the results became correct (an alternate solution is to use 3i instead of 3*i).


To this day, I’ve never run across another bug with such a frustratingly obvious solution.

It may have taken three weeks to solve, but at least I got a good “Worst. Bug. Ever.” story out of it.

via Hacker News http://swanson.github.com/blog/2013/01/20/worst-bug-ever.html


Filed under Auto

HackerRank Will Host Back To School Hackathon, Bringing College Students To Hot Startups

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

HackerRank has hosted college-focused hackathons before, but on February 2, it plans to connect some of the top coding talent in universities with some of the best-known companies in Silicon Valley.

Developed by the same company behind InterviewStreet, a site where companies find programmers by hosting “CodeSprints,” the HackerRank service launched last fall at the TechCrunch Disrupt conference. Co-founder Vivek Ravisankar said the goal is to create a community where hackers can complete programming challenges and see how they stack up against others. Unlike Coursera and Udacity, HackerRank is less focused on teaching you the basics of programming and more on letting coders practice their skills, he said.

For now, Ravisankar said that InterviewStreet is the company’s moneymaker, while at HackerRank he’s just trying to “build the user base and a very sticky platform.” Ultimately, he plans to make money by connecting programmers with companies they want to work for, but he said that will be a purely opt-in system.

As for the upcoming Back to School Challenge, Ravisankar said he has realized that college students, especially those who don’t go to a school in the San Francisco Bay Area, don’t really know much about Silicon Valley. The contest’s main prize is supposed to address that. The top 10 competitors will receive an all-expenses-paid trip to Silicon Valley, where HackerRank has organized tours at a number of companies, including Quora, Counsyl, PocketGems, OpenTable, RocketFuel, Weebly, Scribd, Pinterest, and Twitter. There are other prizes — the top prize includes $2,000, a meeting with a partner at Y Combinator, and office hours with the HackerRank founders.

The contest will take place over 24 hours and consist of five challenges, with the first one focused on artificial intelligence. Ravisankar said he’ll be doing outreach at more than 30 schools, including Stanford, Berkeley, and Purdue, but any college student can participate — you just need to have a .edu email address.

Ravisankar said he’s hoping to host these types of Back to School challenges three or four times every year. You can read more and sign up here.

via TechCrunch http://techcrunch.com/2013/01/07/hackerrank-back-to-school/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29


Filed under Auto

The Cellphone Glued to Your Hand (authpad.com)

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

I’m a student, so as such, it should surprise no one to learn I spend a lot of my week in classrooms. I have to sit next to fellow students who give varying degrees of a damn about the class they’re in. Most of them look fairly normal, they appear to care enough to try something new, and all of them sound pretty intelligent. You’ll actually find if you sit down and talk to anyone for a while that whoever you talk to probably comes across as intelligent, but I digress.

The one thing most of these students have in common is their inability to put away their cellphones. These days, most students carry a smartphone, but the choice seems to matter little: the proclivity to whip out a phone mid-class to text friends or browse Facebook appears the same among smartphone and feature-phone users. This distracts me and anyone else not staring at a phone every waking moment. It disrespects the classroom and the idea of learning something when one believes that a response to Eric’s message, “hey hw u doin wanna hang out 2nite,” takes precedence over a 50-minute to hour-and-a-half class.

I study English, so many of my classes involve workshops — we focus on helping each other, fostering a small community in a classroom. To disrupt it with the constant vibration of a phone, and with one’s noticeable shuffle to grab the phone inside the backpack conveniently laid on the desk to hide it, shows a sad lack of care for that community. In writing courses, most of us hope to become better writers. We wouldn’t take courses with such loose guidelines otherwise, though I grant some may take the workshop because they feel they can pass it easily. The same goes for other classes: the phone disturbs others, makes it difficult to focus on the task at hand, and makes the phone-obsessed difficult to work with.

I can’t say whether the phone harms students using it, nor if their grades suffer. Phones help in class too, so one shouldn’t ditch the little device. Folks can use them for plenty of good: looking up definitions, finding information the instructor or another student couldn’t recall, and other little situations. Smartphones can make one more productive with their easy access to information. Facebook does not. Texting does not. These students should show up, put their minds in the class mentally as well as physically, and respect the time others put into the class.

So if you find yourself ogling your phone in class, please stop yourself. Shut out your outside life in class and try to respect your classmates for the remainder of the class. We’d like to make it through without hoping a bus hit you on the way to class. One can’t avoid emergencies, but Brad’s wicked awesome keg stand can wait, much like his business degree.

via Hacker News http://null.authpad.com/the-cellphone-glued-to-your-hand


Filed under Auto

Fast Inverse Square Root

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

This post is about the magic constant 0x5f3759df and an extremely neat hack, fast inverse square root, which is where the constant comes from.

Meet the inverse square root hack:

float FastInvSqrt(float x) {
  float xhalf = 0.5f * x;
  int i = *(int*)&x;         // evil floating point bit level hacking
  i = 0x5f3759df - (i >> 1);  // what the fuck?
  x = *(float*)&i;
  x = x*(1.5f-(xhalf*x*x));
  return x;
}

What this code does is calculate, quickly, a good approximation for

\frac{1}{\sqrt{x}}

It’s a fairly well-known function these days and first became so when it appeared in the source of Quake III Arena in 2005. It was originally attributed to John Carmack but turned out to have a long history before Quake going back through SGI and 3dfx to Ardent Computer in the mid 80s to the original author Greg Walsh. The concrete code above is an adapted version of the Quake code (that’s where the comments are from).

This post has a bit of fun with this hack. It describes how it works, how to generalize it to any power between -1 and 1, and sheds some new light on the math involved.

(It does contain a fair bit of math. You can think of the equations as notes – you don’t have to read them to get the gist of the post but you should if you want the full story and/or verify for yourself that what I’m saying is correct).

Why?

Why do you need to calculate the inverse of the square root – and need it so much that it’s worth implementing a crazy hack to make it fast? Because it’s part of a calculation you do all the time in 3D programming. In 3D graphics you use surface normals, 3-coordinate vectors of length 1, to express lighting and reflection. You use a lot of surface normals. And calculating them involves normalizing a lot of vectors. How do you normalize a vector? You find the length of the vector and then divide each of the coordinates with it. That is, you multiply each coordinate with

\frac{1}{\sqrt{x^2+y^2+z^2}}

Calculating x^2+y^2+z^2 is relatively cheap. Finding the square root and dividing by it is expensive. Enter FastInvSqrt.

What does it do?

What does the function actually do to calculate its result? It has 4 main steps. First it reinterprets the bits of the floating-point input as an integer.

int i = *(int*)&x;         // evil floating point bit level hack

It takes the resulting value and does integer arithmetic on it which produces an approximation of the value we’re looking for:

i = 0x5f3759df - (i >> 1);  // what the fuck?

The result is not the approximation itself though, it is an integer which happens to be, if you reinterpret the bits as a floating point number, the approximation. So the code does the reverse of the conversion in step 1 to get back to floating point:

x = *(float*)&i;

And finally it runs a single iteration of Newton’s method to improve the approximation.

x = x*(1.5f-(xhalf*x*x));

This gives you, very quickly, an excellent approximation of the inverse square root of x. The last part, running Newton’s method, is relatively straightforward so I won’t spend more time on it. The key step is step 2: doing arithmetic on the raw floating-point number cast to an integer and getting a meaningful result back. That’s the part I’ll focus on.
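If you want to play with the function without compiling any C, the same four steps translate directly to JavaScript; a pair of typed-array views over one buffer stands in for the pointer casts. This port is just a convenience for experimenting, not something from the original source.

function fastInvSqrt(x) {
  var buf = new ArrayBuffer(4);
  var f32 = new Float32Array(buf);
  var i32 = new Int32Array(buf);
  f32[0] = x;                             // step 1: write the float's bits
  i32[0] = 0x5f3759df - (i32[0] >> 1);    // step 2: integer arithmetic on those bits
  var y = f32[0];                         // step 3: read the bits back as a float
  return y * (1.5 - 0.5 * x * y * y);     // step 4: one iteration of Newton's method
}

// fastInvSqrt(4) ≈ 0.499, vs. the exact value 0.5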

What the fuck?

This section explains the math behind step 2. (The first part of the derivation below, up to the point of calculating the value of the constant, appears to have first been found by McEniry).

Before we can get to the juicy part I’ll just quickly run over how standard floating-point numbers are encoded. I’ll just go through the parts I need, for the full background wikipedia is your friend. A floating-point number has three parts: the sign, the exponent, and the mantissa. Here’s the bits of a single-precision (32-bit) one:

s e e e e e e e e m m m m m m m m m m m m m m m m m m m m m m m

The sign is the top bit, the exponent is the next 8 bits and the mantissa the bottom 23. Since we’re going to be calculating the square root, which is only defined for positive values, I’m going to assume the sign is 0 from now on.

When viewing a floating-point number as just a bunch of bits the exponent and mantissa are just plain positive integers, nothing special about them. Let’s call them E and M (since we’ll be using them a lot). On the other hand, when we interpret the bits as a floating-point value we’ll view the mantissa as a value between 0 and 1, so all 0s means 0 and all 1s is a value very close to but slightly less than 1. And rather than use the exponent as an 8-bit unsigned integer we’ll subtract a bias, B, to make it a signed integer between -127 and 128. Let’s call the floating-point interpretations of those values e and m. In general I’ll use upper-case letters for values that relate to the integer view and lower-case for values that relate to the floating-point view.

Converting between the two views is straightforward:

m = \frac{M}{L}

e = E - B

For 32-bit floats L is 2^23 and B is 127. Given the values of e and m you calculate the floating-point number’s value like this:

(1+m)2^e

and the value of the corresponding integer interpretation of the number is

M + LE
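If you want to pull these quantities out of a concrete number and check the formulas above, a small helper will do it (JavaScript for convenience; the sign bit is assumed to be 0, as above):

function floatFields(x) {
  var L = Math.pow(2, 23), B = 127;
  var buf = new ArrayBuffer(4);
  new Float32Array(buf)[0] = x;
  var I = new Uint32Array(buf)[0];    // the integer view of the bits
  var E = Math.floor(I / L);          // the exponent bits
  var M = I % L;                      // the mantissa bits
  return { I: I, E: E, M: M, e: E - B, m: M / L };
}

// floatFields(1.0) -> { I: 1065353216, E: 127, M: 0, e: 0, m: 0 }
// and indeed (1 + 0) * Math.pow(2, 0) is 1.0, while M + L * E is 1065353216.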

Now we have almost all the bits and pieces I need to explain the hack. The value we want to calculate, given some input x, is the inverse square root or

y = \frac{1}{\sqrt{x}} = x^{-\frac 12}

For reasons that will soon become clear we’ll start off by taking the base 2 logarithm on both sides:

\log_2 y = {-\frac 12}\log_2 x

Since the values we’re working with are actually floating-point we can replace x and y with their floating-point components:

\log_2 (1+m_y) + e_y = {-\frac 12}(\log_2 (1+m_x) + e_x)

Ugh, logarithms. They’re such a hassle. Luckily we can get rid of them quite easily but first we’ll have to take a short break and talk about how they work.

On both sides of this equation we have a term that looks like this,

\log_2(1 + v)

where v is between 0 and 1. It just so happens that for v between 0 and 1, this function is pretty close to a straight line:

[plot: log2(1 + v) compared with the straight line v + σ]

Or, in equation form:

\log_2(1 + v) \approx v + \sigma

Where σ is a constant we choose. It’s not a perfect match but we can adjust σ to make it pretty close. Using this we can turn the exact equation above that involved logarithms into an approximate one that is linear, which is much easier to work with:

m_y + \sigma + e_y \approx {-\frac 12}(m_x + \sigma + e_x)

Now we’re getting somewhere! At this point it’s convenient to stop working with the floating-point representation and use the definitions above to substitute the integer view of the exponent and mantissa:

\frac{M_y}{L} + \sigma + E_y - B \approx {-\frac 12}(\frac{M_x}{L} + \sigma + E_x - B)

If we shuffle these terms around a few steps we’ll get something that looks very familiar (the details are tedious, feel free to skip):

\frac{M_y}{L} + E_y \approx {-\frac 12}(\frac{M_x}{L} + \sigma + E_x - B) - \sigma + B

\frac{M_y}{L} + E_y \approx {-\frac 12}(\frac{M_x}{L} + E_x) - \frac{3}{2}(\sigma - B)

M_y + LE_y \approx {\frac 32}L(B - \sigma) - {\frac 12}(M_x + LE_x)

After this last step something interesting has happened: among the clutter we now have the value of the integer representations on either side of the equation:

\mathbf{I_y} \approx {\frac 32}L(B - \sigma) - {\frac 12}\mathbf{I_x}

In other words the integer representation of y is some constant minus half the integer representation of x. Or, in C code:

i = K - (i >> 1);

for some K. Looks very familiar right?

Now what remains is to find the constant. We already know what B and L are but we don’t have σ yet. Remember, σ is the adjustment we used to get the best approximation of the logarithm, so we have some freedom in picking it. I’ll pick the one that was used to produce the original implementation, 0.0450465. Using this value you get:

{\frac 32}L(B - \sigma) = {\frac 32}2^{23}(127 - 0.0450465) = 1597463007

Want to guess what the hex representation of that value is? 0x5f3759df. (As it should be of course, since I picked σ to get that value.) So the constant is not a bit pattern as you might think from the fact that it’s written in hex, it’s the result of a normal calculation rounded to an integer.
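As a quick numeric check (plain arithmetic, so any language will do; JavaScript here, truncating to an integer, which is what lands on the published constant with this value of σ):

var L = Math.pow(2, 23), B = 127, sigma = 0.0450465;
var K = Math.floor(1.5 * L * (B - sigma));

console.log(K);                 // 1597463007
console.log(K.toString(16));    // "5f3759df"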

But as Knuth would say: so far we’ve only proven that this should work, we haven’t tested it. To give a sense for how accurate the approximation is here is a plot of it along with the accurate inverse square root:

[plot: the approximation vs. the exact inverse square root]

This is for values between 1 and 100. It’s pretty spot on right? And it should be – it’s not just magic, as the derivation above shows, it’s a computation that just happens to use the somewhat exotic but completely well-defined and meaningful operation of bit-casting between float and integer.

But wait there’s more!

Looking at the derivation of this operation tells you something more than just the value of the constant though. You will notice that the derivation hardly depends on the concrete value of any of the terms – they’re just constants that get shuffled around. This means that if we change them the derivation still holds.

First off, the calculation doesn’t care what L and B are. They’re given by the floating-point representation. This means that we can do the same trick for 64- and 128-bit floating-point numbers if we want, all we have to do is recalculate the constant, which is the only part that depends on them.

Secondly it doesn’t care which value we pick for σ. The σ that minimizes the difference between the logarithm and x+σ may not, and indeed does not, give the most accurate approximation. That’s a combination of floating-point rounding and because of the Newton step. Picking σ is an interesting subject in itself and is covered by McEniry and Lomont.

Finally, it doesn’t depend on -1/2. That is, the exponent here happens to be -1/2 but the derivation works just as well for any other exponent between -1 and 1. If we call the exponent p (because e is taken) and do the same derivation with that instead of -1/2 we get:

\mathbf{I_y} \approx (p - 1)L(\sigma - B) + p\mathbf{I_x}

Let’s try a few exponents. First off p=0.5, the normal non-inverse square root:

\mathbf{I_y} \approx K_{\frac 12} + {\frac 12}\mathbf{I_x}

K_{\frac 12} = {\frac 12}L(B - \sigma) = {\frac 12}2^{23}(127 - 0.0450465) = \mathtt{0x1fbd1df5}

or in code form,

i = 0x1fbd1df5 + (i >> 1);

Does this work too? Sure does:

[plot: the approximation vs. the exact square root]

This may be a well-known method to approximate the square root but a cursory google and wikipedia search didn’t suggest that it was.

It even works with “odd” powers, like the cube root

\mathbf{I_y} \approx K_{\frac 13} + {\frac 13}\mathbf{I_x}

K_{\frac 13} = {\frac 23}L(B - \sigma) = {\frac 23}2^{23}(127 - 0.0450465) = \mathtt{0x2a517d3c}

which corresponds to:

i = (int) (0x2a517d3c + (0.333f * i));

Since this is an odd factor we can’t use shift instead of multiplication. Again the approximation is very close:

[plot: the approximation vs. the exact cube root]

At this point you may have noticed that when changing the exponent we’re actually doing something pretty simple: just adjusting the constant by a linear factor and changing the factor that is multiplied onto the integer representation of the input. These are not expensive operations so it’s feasible to do them at runtime rather than pre-compute them. If we pre-multiply just the two other factors:

L(B - \sigma) = 2^{23}(127 - 0.0450465) = \mathtt{0x3f7a3bea}

we can calculate the value without knowing the exponent in advance:

i = (1 - p) * 0x3f7a3bea + (p * i);

If you shuffle the terms around a bit you can even save one of the multiplications:

i = p * (i - 0x3f7a3bea) + 0x3f7a3bea;

This gives you the “magic” part of fast exponentiation for any exponent between -1 and 1; the one piece we now need to get a fast exponentiation function that works for all exponents and is as accurate as the original inverse square root function is to generalize the Newton approximation step. I haven’t looked into that so that’s for another blog post (most likely for someone other than me).
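Wrapped up as a function, the runtime version might look something like this (JavaScript again, with typed arrays for the bit-cast; there is no Newton step here, so expect errors of a few percent):

function fastPowApprox(x, p) {    // assumes x > 0 and -1 < p < 1
  var buf = new ArrayBuffer(4);
  var f32 = new Float32Array(buf);
  var i32 = new Int32Array(buf);
  f32[0] = x;
  i32[0] = (1 - p) * 0x3f7a3bea + p * i32[0];   // truncated to an int32 on assignment
  return f32[0];
}

// fastPowApprox(10, 0.5)  ≈ 3.20  (exact: 3.162…)
// fastPowApprox(10, -0.5) ≈ 0.33  (exact: 0.316…)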

The expression above contains a new “magical” constant,  0x3f7a3bea. But even if it’s in some sense “more magical” than the original constant it depends on an arbitrary choice of σ so it’s not universal in any way. I’ll call it Cσ and we’ll take a closer look at it in a second.

But first, one sanity check we can try with this formula is when p=0. For a p of zero the result should always be 1, since x^0 is 1 independent of x. And indeed the term involving the input falls away, because it is multiplied by 0, and we get simply:

i = 0x3f7a3bea;

Which is indeed constant – and interpreted as a floating-point value it’s 0.977477, also known as “almost 1”, so the sanity check checks out. That tells us something else too: Cσ actually has a meaningful value when cast to a float. It’s 1, or very close to it.

That’s interesting. Let’s take a closer look. The integer representation of Cσ is

C_\sigma = L(B - \sigma) = LB - L\sigma

This is almost but not quite the shape of a floating-point number, the only problem is that we’re subtracting rather than adding the second term. That’s easy to fix though:

LB - L\sigma = LB - L + L - L\sigma = L(B - 1) + L(1 - \sigma)

Now it looks exactly like the integer representation of a floating-point number. To see which we’ll first determine the exponent and mantissa and then calculate the value, cσ. This is the exponent:

e_{c_\sigma} = (E_{C_\sigma} - B) = (B - 1 - B) = -1

and this is the mantissa:

m_{c_\sigma} = \frac{M_{C_\sigma}}{L} = \frac{L(1 - \sigma)}{L} = 1 - \sigma

So the floating-point value of the constant is (drumroll):

c_\sigma = (1 + m_{c_\sigma})2^{e_{c_\sigma}} = \frac{1 + 1 - \sigma}2 = 1 - \frac{\sigma}2

And indeed if you divide our original σ from earlier, 0.0450465, by 2 you get 0.02252325; subtract it from 1 you get 0.97747675 or our friend “almost 1″ from a moment ago. That gives us a second way to view Cσ, as the integer representation of a floating-point number, and to calculate it in code:

float sigma = 0.0450465;
float c_sigma = 1 - (0.5f * sigma);
int C_sigma = *(int*)&c_sigma;

Note that for a fixed σ these are all constants and the compiler should be able to optimize this whole computation away. The result is 0x3f7a3beb – not exactly 0x3f7a3bea from before but just one bit away (the least significant one) which is to be expected for computations that involve floating-point numbers. Getting to the original constant, the title of this post, is a matter of multiplying the result by 1.5.

With that we’ve gotten close enough to the bottom to satisfy at least me that there is nothing magical going on here. For me the main lesson from this exercise is that bit-casting between integers and floats is not just a meaningless operation, it’s an exotic but very cheap numeric operation that can be useful in computations. And I expect there’s more uses of it out there waiting to be discovered.

via Hacker News http://blog.quenta.org/2012/09/0x5f3759df.html


Filed under Auto

Rewind: How it all started for Del Bosque

This post has been automatically generated. I use this blog to collect links that I have bookmarked. All activity is automated.

September 14th, 2012 by Fiifi Anaman

Today, Vicente Del Bosque González is the man; the epitome of success in football coaching. The legend. The man whose CV is coveted by all other managers, with two UEFA Champions League titles, a World Cup, and a European Championship. He has them all, all three of the most prestigious competitions in football, an unprecedented achievement.

It has not always been this rosy. Throughout his career he has been doubted, ridiculed, vilified and undermined, often based more on his personality than on his concrete achievements. At Real Madrid, for instance, he was undermined and accused of being inept, of having the galacticos do his work for him. He was also accused of being too soft-spoken, ‘safe’ and diplomatic, always shying away from confrontations with his charges as well as from media polemics. With Spain, people have suggested that he inherited Luis Aragones’s diminutive tiki-taka wizards (as well as enjoying a beneficial continuity of Barca’s philosophy at the national level), therefore having very little to do.

There has always been an aura of pessimism around him wherever he has been, despite his always delivering. Maybe it is because he does not have the phenotype of ‘the media’s favourite’ – he never attracts controversy, nor does he look like the monolithic figure that the high-profile positions he has occupied are used to. Maybe his efforts – like keeping a winning team in winning mode, or achieving with a star-studded side – have not been the sort that sit on the surface, easily seen and praised. Maybe his hard work has always been eclipsed by certain circumstances through no fault of his own.

Perhaps due to all this, despite his stunning achievements, the calm, unassuming, Salamanca-born manager of the Spanish national team – a team already heralded as the greatest ever – hardly ever receives the kind of media spotlight that, say, Guardiola or Mourinho receives today.

But that is not, and has never been, a problem for the famously moustachioed 61-year-old. In fact, he prefers it that way; he loves the quiet, away from the media lens. And he could not care less about being criminally downplayed and underrated. His immense success speaks for itself.

But how did it all begin for him? Well, his journey towards the pinnacle of success began in 1999, with an unusual first season. A first season that captured his familiarity with the concept of the underdog, and with achieving against the odds. A first season that, I’m sure, he’ll always look back on with nostalgia.

Humble beginnings

During his playing days he was a midfielder. His most notable period was with the club dear to his heart, Real Madrid. He played in Madrid for 14 years, between 1970 and 1984, winning 5 La Ligas and 4 Copas del Rey. After that spell he worked diligently behind the scenes for almost 16 years, during which he coached the Real Madrid B side and at times handled the first team on an interim basis when there was no substantive manager (11 matches in 1994 and 1 match in 1996).

The man, once described in a 2003 BBC article as being “as cool as a cryogenically frozen cucumber”, never rushed. He was patient, working hard and taking his chances as and when they came. He knew he would one day end up in the manager’s seat at the Bernabeu on a full-time basis. Managers came and left, and humble Del Bosque remained behind the scenes, learning, waiting.

Breakthrough

And then it came. His time. His opportunity. On the 17th of November 1999, the board at Real, led by Lorenzo Sanz – after having problems with manager John Toshack and his non-performance – felt it was time to shake things up on the technical bench, and finally time to give Del Bosque his chance. Real Madrid had been managed by a staggering 7 managers in three years. The club sought some sort of stability. There was a need to secure the services of an astute trainer for the long term. Debts were also piling up. There was a need for success. The board turned to the modest Del Bosque, and he did not turn them down. He officially assumed the most popular hot seat in football on the 18th of November, 1999.

It wasn’t exactly a high-profile appointment. He wasn’t the most popular of candidates. But the board felt they had to try something new, just as Barcelona did when they recruited Guardiola, or Inter with Stramaccioni. He had never been a manager at the top level for a full season before. Experience did not favour him. It was basically a gamble. But Del Bosque had been working with the club for almost all of his life. He knew the club well; he loved it. Above all, he was hardworking.

Tough task

He had a tough job to do. John Toshack had drawn and lost most of the league games up to that point, and the team was sitting 8th in the table. There was also the Champions League, and qualification to the next round from the second group stage (Toshack had already qualified the team from the first group stage). And there was the Copa del Rey too. The task was ginormous, and the then 48-year-old Del Bosque had been thrown in at the deep end. Even though he was a faithful Madridista through and through, there was no way he was going to evade the sack if he messed up. Politics at Real meant Lorenzo Sanz was virtually betting his presidential future on Del Bosque. It was more or less make or break.

He got to work in earnest, trying to juggle the demands of all three competitions and their accompanying expectations. But he held his own, remained focused, and sought to deliver.

The rookie’s success

Del Bosque finished the 1999/00 La Liga season in fifth place – a position which would normally have been disastrous for a club like Real Madrid – but it was not.

Why? They achieved a points tally of 62, only 7 points behind champions Deportivo La Coruna; impressive, considering how badly they had started the season. Also, 5th position then meant Champions League qualification – which, in fact, they found out they wouldn’t need, because…

…they went on to win the Champions League itself, beating fellow Spanish club Valencia convincingly in the final, 3-0. This was after qualifying narrowly from the second group phase (above third-placed Dynamo Kyiv on head-to-head), and subsequently flooring their quarter- and semi-final opponents.

It became their second triumph in four seasons. Interestingly, Del Bosque also reached the semi-final of the Copa del Rey, losing only to eventual winners Espanyol. The man who took over in medias res, amid poor performances and instability, united the club, raised their game, and went on to secure the biggest trophy in club football. And this was all done in his first full season of top-level management, at the biggest, most successful club in the history of football, where the pressure is unimaginable.

The first chapter of a remarkable success story had been written.

Don Vicente went on to win 6 more trophies in his next three seasons at the helm, including another European Cup in 2002 as well as two La Liga titles, in what became the club’s second most successful era.


Author Info

Fiifi Anaman

Fiifi Anaman is a young freelance football writer from Ghana. Writes for Goal.com Ghana, Full-TimeWhistle.com amongst other outlets. Occasionally talks about football on radio.



via Back Page Football http://backpagefootball.com/rewind-how-it-all-started-for-del-bosque/48811/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+BackPageFootball+%28Back+Page+Football%29


Filed under Auto