Hi folks,
It’s time to wrap up everyone’s favorite annual tradition – the TOA Book Award, or as it’s more commonly known, the Most Irrelevant Prize in World Literature.
We’re down to six finalists:
Daily Rituals by Mason Currey
Bring The Noise by Raphael Honigstein
Gridiron Genius by Michael Lombardi
The Prophet by Kahlil Gibran
Skin in the Game by Nassim Nicholas Taleb
Little Panic by Amanda Stern
Today, we'll eliminate three finalists from contention for the final prize. But before proceeding, a note - after years of confusion, I've decided to align the year of the award with the calendar year during which I make the decision. In other words, this is the 2019 award, but it's for a book I read in 2018.
Make sense?
No?
Good!
Daily Rituals by Mason Currey (December 2018)
Longtime readers will surely be exhausted by references to this book, Mason Currey’s investigation into the routines and schedules of various creative types throughout history. The main idea I took from this book was the importance of discovering a personal rhythm for creative work, and then pouring energy into protecting that time. Whether an artist works in structured ninety-minute increments, likes to block out a few hours every morning, or prefers all-night benders fueled by sudden inspiration, the key isn’t really in the details on the calendar. Rather, it’s a deep understanding of the self that is reflected in the way an artist uses his or her most valuable resource – time.
Parting thought: Creativity is variation within repetition.
I liked this thought the most out of many other interesting observations because I feel creativity is a concept all too easy to talk knowingly about without ever stopping to consider what it actually means. The reality is that on a planet of seven billion, someone else has likely come up with all of your good ideas (and even most of your bad ones). Creativity isn’t originality in the sense of pulling some new idea out of the air – rather, it’s a process of connecting existing people, places, and ideas in ways that don’t happen naturally. In a certain sense, creativity is a leadership act, and perhaps the most difficult one. The artist in full control of his or her creative powers is more likely to find new variation in the pattern others take for granted, and these variations often become the moments we look back on as the inspirations that changed the world.
Skin in the Game by Nassim Nicholas Taleb (July 2018)
Taleb’s newest book examines the consequences when systems lose the symmetry of their risk transfers. If the person in charge faces no consequences in the event of failure, how does the system change? The answer is in many ways obvious, but as readers of his past work know, there is much more to it than the obvious answer.
Best idea: In a probabilistic sense, volatility and time are equivalent.
I hadn’t thought much about this lately, but as I reviewed my notes on this book, it was this idea that jumped out at me. The big themes in this work are much easier to understand through this lens. Over a long period, it isn’t so much the size of the risk in any one instance but rather the number of instances that determines the likelihood of ruin.
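The arithmetic behind this idea is worth seeing once. Here’s a small sketch (the 1% figure and the time spans are my own illustrative numbers, not Taleb’s) showing how a risk that looks negligible in a single instance compounds into near-certainty over enough repetitions:

```python
# How a small per-instance risk compounds over repeated exposures.
# Illustrative numbers only: a hypothetical 1% chance of ruin per year.
def ruin_probability(per_event_risk: float, repetitions: int) -> float:
    """Chance of suffering at least one ruinous outcome across
    independent repetitions of the same small risk."""
    return 1 - (1 - per_event_risk) ** repetitions

# The same 1% risk, taken once vs. taken every year for decades.
for years in (1, 10, 50):
    print(years, round(ruin_probability(0.01, years), 3))
```

A 1% annual risk is a roughly 40% risk over fifty years – the number of instances, not the size of any one of them, does the damage.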
Also a good idea: Investors do not match market gains if losses force a reduced position.
This is Taleb's ‘if-then’ thinking at its best. Most grasp the power of bad luck evening out and will at least pay lip service to riding out a tough time. However, the question isn’t about resolve but rather about resilience. If you know the rain will force you inside, you shouldn't invest in a way that will likely expose you to the occasional market thunderstorm.
Parting thought: An individual with a sufficient level of intolerance can change default preferences for the entire society.
This was an idea I (sort of) explained in this post. I sense a lot of big changes on the horizon and this idea seems to explain just how much power one person has in the equation. The speed of the change is proportional to the cost of changing preferences but this detail is irrelevant when talking about major changes that will happen slowly over the long term.
Gridiron Genius by Michael Lombardi (November 2018)
Longtime TOA favorite Lombardi makes his writing debut in this work about leadership in NFL football. Like with many of the sports books I highlight on TOA, I felt many of this work’s big ideas translated beyond the helmet football field. Whether it was in his comment about the importance of simplicity as a driver of improved execution via repetition, his definition of growth as discovering new ways to do the same things, or his insistence that leadership starts by expertly defining what everyone in the organization must do, Lombardi’s clear thinking throughout this book shines a light on the basic fundamentals that define strong leadership in any context.
Food for thought: As I regain my Business Bro swagger in a new role, I should remember to keep in mind Lombardi’s point that a great fit for any system is a person whose abilities are maximized in the current scheme while also fueling the evolution of the system.
Parting thought: The coach’s job is to understand how each player retains information, and then tailor the teaching methods accordingly.
Lombardi notes that Bill Belichick’s offseason planning included time for the coaching staff to assess and adjust their teaching methods. This speaks to the broader point about the importance of understanding the way players retain information. Each player learns in both different ways and at different speeds – the best coaches can handle the largest variation in these combinations to get the most out of each player.
Sunday, May 26, 2019
reading review - skin in the game (riff offs, part 2 - the effect of the unseen)
Reader, today the long journey through my notes, thoughts, and observations about Nassim Nicholas Taleb’s Skin in the Game concludes at last. If you’ve made it this far, I congratulate you. And if you’ve enjoyed the posts, I humbly recommend reading the book itself.
Of course, you may be protesting at the moment – why read the book after such an exhaustive series of reviews? I agree with the thought, or at least the spirit of it, but in that case I’ll suggest one of his earlier works. A major shared theme among those books is the effect of the unseen or overlooked and I’m sure those who’ve enjoyed these posts will find plenty to like about those prior works.
Today’s post looks over some thoughts along that same theme from Skin in the Game – the effect of the unseen or overlooked.
Historians often create their own problems by relying too much on observed events and not considering enough the impact of the unseen.
This comment rings a little true for me. When I joke around here that there is always a reason and The Real Reason, I think I’m getting at a tendency people have to make firm conclusions based on what they observe rather than consider the full context of what remains unknown.
But what is the historian supposed to do, study the unseen? This would turn ‘the unseen’ into ‘the seen’, right? How would a historian access the unseen? Perhaps this thought says more about a person’s attitude towards the subject of history (and its perceived use) than towards the historian.
On a semi-related note, one of the funniest job descriptions I’ve ever heard was when my friend described a historian as someone who ‘knows what happened’.
Though it seems a contradiction that the Vatican goes to the doctor before turning to prayer, recall that voluntary death is banned by the faith.
If you like this type of thought, you must be the sort of person who gets excited for new Taleb books…
A degree in a certain discipline is likely BS if the prestige of the school is closely tied to the value of the degree.
…and this thought is the type that upsets readers once they get their hands on the new Taleb book.
Systems often collapse long before any structural defects can be cured.
This comment brings me back to his first bestselling book, The Black Swan, and one of the main premises from the work – instead of trying to predict inherently unpredictable events, we should try to find ways to mitigate the negative effects of such events. Countries in earthquake zones, for example, don’t predict the next tremor and evacuate two days before the coming quake – they design resilient buildings, educate the population on how to respond during an emergency, and conserve resources to help the hardest hit victims.
I think a common tendency is to look at a system’s defects and consider ways to fix them. This is an idea I like in theory but in practice a system’s defects are usually accepted evils that enable its strengths. My expectation is that those who attempt to fix such defects within a system often encounter a lot of unexpected resistance from those who stand to lose the most from the repairs. A more productive approach might be to learn the negative effects produced by a system’s defects and proactively find ways to limit the influence of those defects.
Isolating one variable and studying how subjects respond to minute changes in the cost or risk burden is not a rigorous way to study attitudes toward risk. For most, this decision is lumped together with all other attitudes or exposures to risk. A person who continues to take on or renew a small, ‘one-off’ risk will eventually face ruin.
This thought gets at the difficult task researchers set for themselves of trying to isolate factors in order to best understand people’s attitudes and preferences. Someone who keeps a fire extinguisher at bedside probably has a very different attitude towards risk than someone who doesn’t but this difference is unlikely to be reflected in behavioral or consumer responses to price changes in fire extinguishers.
Comparing the multiplicative with the independent leads to great distortions in understanding risk. One person killed by a bookcase at home is not the same as one person dying from contracting a highly contagious illness because the observation of the latter greatly increases the probability of another person suffering the same fate.
This was my favorite thought from the book and one I referenced a number of times in my many posts about Skin in the Game. It speaks to the difference between actions and interactions (one of the subtle themes from the book) and underscores the importance of knowing how to determine what will happen next given some event in the present.
A story of someone crashing a car is a tragedy but most such stories go ignored because yesterday’s crash has essentially no effect on the chances of you crashing if you go for a drive today. On the other hand, the panicked report about a shark attack (or even a shark sighting) always leads the news because it represents a slightly greater chance that you might get eaten the next time you go for a swim.
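The shark-versus-car contrast comes down to whether one observed event changes the odds of the next. A toy sketch of the distinction (the reproduction number and generation counts here are my own hypothetical inputs, not figures from the book):

```python
# Toy contrast between independent and multiplicative (contagious) risk.
# An independent event stays at one case; a contagious one with
# reproduction number R spawns R new cases per generation.
def expected_total_cases(R: float, generations: int) -> float:
    """Expected total cases after one observed event: the geometric
    sum 1 + R + R^2 + ...; an independent event is the case R = 0."""
    return sum(R ** g for g in range(generations + 1))

print(expected_total_cases(0.0, 5))  # independent: stays at 1.0
print(expected_total_cases(2.0, 5))  # contagious: 63.0
```

One car crash stays one car crash; one contagious case, given a reproduction number above zero, tells you more cases are coming – which is exactly why treating the two observations the same distorts the risk picture.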
If more people die on the road than in the ocean, maybe we should lock up cars instead of sharks, and put them all in parks, where we can go and view them…
OK, fine, that wasn’t from Skin in the Game – it was from ‘Dead Fox’ by TOA favorite Courtney Barnett, who in addition to being an excellent guitar player and songwriter apparently also possesses a highly developed understanding of risk.
But who better to get the last word in a riff off?
Thanks for reading.
Tim
Friday, May 17, 2019
reading review - skin in the game (recurring risk)
In a couple of my posts about Skin in the Game, I referenced the importance of differentiating between contagious risk and independent risk. Today, I wanted to look at a similar idea – recurring risk – because I feel it represents one of the most significant themes in the book.
Let’s start with a common piece of investment wisdom that I’ve passed along myself once or twice around these parts – the magical index fund. The basic theory is simple – an investor who buys an index fund will earn matching returns to the market. And since the market tends to generate positive returns over time, such an investor is almost sure to do well over a long period. In fact, if the investment is made over several decades, the index fund basically ensures the best possible balance of low risk and high return.
Sounds good, right? There is a catch, of course, and this is where recurring risk comes in. If an investor remains in the market for a long enough time, there will eventually be a rough period or two. The only way an investor will not earn a matching return to the market is if these losses force a reduction in market position. This might not sound like such a problem at the time the investment is made, and it is easy to resolve to practice austerity in the future (if times get tough, I’ll tighten the belt elsewhere without touching the investment), but this stance overlooks the strong likelihood that tough times will go hand-in-hand with a declining market. It’s hard enough to invest money during a good economy – when the economy slows down (or even goes into a recession) it becomes even harder to justify putting money aside when it might be needed for more urgent short-term purchases.
An investment survives if the investor limits exposure to risk so that losses do not lead to a reduction in position. Therefore, the longer the time horizon for the investment, the more important it is to consider exposure to risk. Otherwise, the investment is sure to give way at some point to a short-term cash need resulting from one of the many ‘unexpected’ personal finance crises that emerge over the course of several decades. To put it in more general terms, survival means being able to hold a position in the face of volatility.
The math behind this line of thinking suggests that in a probabilistic sense, risk, volatility, and time are roughly equivalent. An investment with a small risk of major losses – let’s say 1% – likely generates a small positive return most of the time. However, since investors generally hold their positions in the case of any small positive return, this strategy is the equivalent of saying – I won’t make a decision about my position until I suffer a major loss. How confident are you going to be in your position if the most recent event was the biggest financial loss of your life? Put this together with the likelihood that these losses tend to coincide with economic downturns, slowdowns, or crashes (and all the short-term financial pressure these events create) and you can begin to see why it is far more difficult than advertised to earn a matching return to the market.
When an investor is unable to hold a position no matter what, the strategy over time is, in a probabilistic sense, roughly equivalent to guaranteed failure. In layman’s terms, this is like walking into a casino, betting it all on red, and resolving to keep putting all your winnings on red until you are wiped out. It might not happen right away, but no matter how you spin it, it’s only a matter of time.
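The casino version of the argument is easy to check directly. A minimal sketch, assuming American roulette odds (the 18-red-of-38-pockets figure is a standard fact, but the framing is mine, not the book’s):

```python
import random

# Bet the entire bankroll on red every spin; the first loss is ruin.
RED_PROB = 18 / 38  # American roulette: 18 red pockets out of 38

def spins_until_ruin(rng: random.Random) -> int:
    """Count winning spins before the inevitable wipeout."""
    spins = 0
    while rng.random() < RED_PROB:
        spins += 1
    return spins

# Exact probability of still being solvent after n spins: (18/38) ** n.
for n in (1, 5, 10):
    print(n, round(RED_PROB ** n, 4))
```

Even with a near-even chance on any single spin, the odds of surviving ten spins in a row are already well under one in a thousand – ruin isn’t a risk of the strategy, it’s the destination.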
Tuesday, May 14, 2019
reading review - skin in the game (the smart businessman)
One of the big questions I’ve wrestled with over the past few years is the appropriate value to place on someone’s education in the context of success. When someone succeeds, is it right to immediately credit their education? Or is there more to success than someone’s diplomas?
In Skin in the Game, Nassim Nicholas Taleb makes a strong argument that in the context of business success people generally place too much value on education. For him, smart people sometimes turn the attitudes they’ve developed through education into obstacles to their own success. This is because an educated person is accustomed to understanding and explaining ideas before getting any form of validation for their work. If understanding must come before application, the educated person might miss an idea that others with less education will jump on right away, simply because the educated person waits to understand the idea before taking initiative.
A version of the above that Taleb cites is how educated people tend to shy away from ideas they perceive as stupid. He notes that smart people fail in business when their tendency to dismiss or merely criticize what they do not understand prevents them from ever recognizing that if something they perceive as stupid still works (as in, makes money), it cannot actually be stupid (from a business perspective, at least, and within a reasonable time period). At the very least, it cannot have been stupid, given that the goal in business is to make money and the so-called ‘stupid’ idea made money.
Taleb goes on to further consider the implications of how education nudges students toward explanation rather than action. When such people get into business, are they more likely to try anything just to see what works, or are they more likely to try the small handful of things they are able to explain? Although there is no perfect approach, it is difficult to think of successful businesses that followed the blueprint their very smart founders drew up on day one.
The business context Taleb uses to consider the importance of education helped me think more deeply about my own question. The kind of intelligence emphasized through education requires understanding before application. The academically successful graduate who goes into business would surely want to apply this skill in the real world. But in business, sometimes things work before anyone understands why. It brings to mind an ironic scene – someone demanding data to help him or her explain the success behind a young initiative while ignoring the fact that the success itself is perhaps the most relevant data point during the early days.
Instead of worrying so much about explaining, perhaps the best approach is to consider how to mitigate the effects of failure. The ‘stupid idea’ that succeeds wildly should be studied and understood, of course, but that doesn’t mean the best thing to do is to stop leveraging the idea until an acceptable narrative is written. Instead, the business should think of ways to put safety nets in place so that if the idea suddenly does start to fail it won’t put everyone out of a job. In fact, this is what businesses should be doing all the time – putting safety nets in place to reduce the impact of errors – because this encourages employees to try new things and learn quickly from mistakes. In such an environment, it is only a matter of time before someone will make a significant discovery.
Footnotes / endnote, really / must everything tie back to politics?
0. But didn’t one of their own get elected?
Though Taleb generally steers clear of politics in this book, he does chime in with the occasional thought. He defines the concept of the educated philistine along these lines, saying that this type of person criticizes decisions others wouldn’t have made if they were smarter. Such a person may, for example, label someone who votes counter to an educated person’s preference as a ‘populist’.
Saturday, May 11, 2019
reading review - skin in the game (presentation and public appearance)
A consistent theme of Nassim Nicholas Taleb’s work is the distorting effect caused by appearance. His recently released Skin in the Game was no exception and I’ve noted a number of my favorite insights on this topic in today’s post.
First, Taleb warns the reader about people who are very easily understood when explaining a complicated idea, task or process – although these people appear knowledgeable, they are very likely bullshitting to some degree. This thought does not deny that there are great teachers capable of simplifying difficult concepts. Rather, it acknowledges that certain ideas require a degree of hard work to understand and cautions against looking for the magic bullets that appear to make this effort unnecessary.
A similar thought explores the role of outward appearances in ‘fake work’. For Taleb, an emphasis on presentation is a clear signal of ‘fake work’ – the more the presentation is emphasized, the less likely it is that the underlying idea is well understood (1). This belief perhaps explains why he feels a successful person who doesn’t quite look the part might have a richer set of skills than someone whose success is explainable by associations with certain ‘appearances’ – a well-groomed manner of presentation, other successful people, prestigious institutions. A person who sends no outward signals of success has, at the very least, demonstrated an ability to overcome certain entrenched prejudices or preferences in a given field.
Footnotes / another Paul Graham reference?
1. Am I wrong?
It reminds me of an idea from this Paul Graham essay – "If a statement is false, that's the worst thing you can say about it." The implication here is that since the worst thing anyone can say about an idea is that it’s wrong, criticizing an idea along any other criteria suggests at the minimum that there is some merit to the underlying thought.
Tuesday, May 7, 2019
reading review - skin in the game (the intolerant state)
Back in August, I posted a link to an article about fascism’s threat to modern society. The basic premise of the piece suggested that fascism would rise as society became acclimated to examples of increasingly brutal or inhumane treatment of the innocent, powerless, or vulnerable. It then went on to point out some examples from current events that underscored the premise.
I’ve written a few posts lately about intolerance in the context of Nassim Nicholas Taleb’s Skin in the Game and the process of preparing those posts made me think back to this article. More specifically, I thought about Taleb’s insights into intolerant minorities and wondered how the process of a larger group adopting the minority’s preferences helps explain some of the political events of the day.
An important aspect of Taleb’s thinking is that minority preferences thrive in multiplicative environments. In these environments, when one person in a group refuses to go along with everyone else the group is forced to either adopt the minority preference or to splinter into factions defined by the difference in preference. This process allows societies to ‘adopt’ a minority preference even if the majority is not strongly in favor.
The most dangerous aspect of fascism is how it leverages this pattern to grant outsized power to those who hold the minority preference. Taleb cites one aspect of Nazi Germany to make this point – one person could turn in dozens of victims in hiding with just one report, while it would take dozens of people working together to hide just a few victims. The article I shared suggested that historically a fascist regime needed around 40% support before it could consolidate power – based on what I read in Skin in the Game, it seems to me that far less than 40% would be required.
The key underlying idea (and one I’ll cover in more detail in an upcoming post) is contagion. A minority preference in a contagious environment has the potential to completely alter the normal preferences within that environment. People intuitively understand the significance of a contagious threat. A good example of this understanding is the difference in how people react to news about someone contracting a highly contagious illness – since just one other person becoming sick increases everyone’s chances of becoming ill, such news is always delivered with grave concern (and occasionally a mild strain of panic). On the other hand, though thousands of people die in car accidents every year, accidents are independent of each other (that is, one accident yesterday doesn't increase my odds of having one today) and therefore such news reports are rarely delivered with the air of a public health warning.
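The difference between the two kinds of risk can be sketched with a toy calculation (all of the numbers below are my own invention for illustration, not figures from the book or from traffic statistics):

```python
def independent_total(population, daily_risk, days):
    """Independent events (like car accidents): one event never raises
    another person's odds, so the total just adds up proportionally."""
    return population * daily_risk * days

def contagious_total(initial_cases, r, generations):
    """Contagious events: each case causes r new cases per generation,
    so the total compounds multiplicatively."""
    total, current = initial_cases, initial_cases
    for _ in range(generations):
        current *= r
        total += current
    return total

# Small starting numbers, very different trajectories.
print(independent_total(10_000, 0.001, 10))  # 100.0 cases - flat and predictable
print(contagious_total(1, 2, 10))            # 2047 cases from a single seed
```

The point of the sketch is only the shape of the two curves: independent risks scale additively while contagious ones explode, which is why one reported infection warrants more alarm than one reported accident.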
Fortunately – at least for those mildly opposed to fascism – the loose organization of the USA slows the spread of contagious ideas and reduces the chances of a minority preference becoming a societal norm. The key to the balance is the diversity of preferences that emerge wherever ideas, lifestyles, and ambitions must coexist. Without this mixture of preferences, uniformity of thought emerges and the threat of contagious minority preference taking root in a society becomes very real.
Thursday, April 25, 2019
reading review - skin in the game (intolerance changes the world)
In my most recent post about Skin in the Game, I described an idea about how intolerant minorities can change a society’s preferences. Astute readers may recall that I made this point by adapting an example from the book into a strange cross between an extended metaphor and a parable. Let’s look a little more closely at this idea today without the aid of my absurd hypothetical scenario.
Taleb’s main point is that small minorities can force the majority to accept their preferences provided two factors are present. The first factor is that the cost of changing preferences is not too high. There are lots of examples out there (that have nothing to do with melons) that illustrate the point. The basic way to identify these instances is to think about widespread behaviors that started with just a few early adopters and became increasingly popular as the cost of joining these pioneers fell. When I think about the commonness of once-fringe behaviors like recycling, text messaging, or online shopping, I note that they became ubiquitous as the cost of adoption decreased.
Not all minority preferences are destined to become tomorrow’s norms. I don't think gender-neutral single restroom units, for example, will replace all segregated restrooms anytime soon. And it is true that cost is probably a factor here - I imagine the spending on plumbing infrastructure required to convert all restrooms to single-room units would sink most building budgets. But the long-term reason why I think this preference is unlikely to become ubiquitous brings me to the second factor – a minority must also be sufficiently intolerant for their preference to take hold. In this example, I suspect it is very difficult for the minority to be intolerant enough – the risk of permanent health problems is simply too high for the minority to refuse to use almost all restrooms on the grounds that the omnipresent setup is discriminatory (1).
Of the two factors, I found the ideas about intolerance much more interesting than the insights about cost. I suppose this makes some sense – as a former economics major, I’ve had plenty of experience thinking about the role of costs in everyday decisions. Quite frankly, these days I find myself a little bored by the topic (usually, it just comes down to how someone else estimated a few numbers, then picking the smaller/larger one as appropriate). Intolerance, on the other hand, is a topic I know little about despite seeming to hear about it every ten minutes these days. Based on what I hear and combining it with some of my basic assumptions, I would have guessed tolerance drives societal change while intolerance was the force that kept the status quo humming along. But the most interesting aspect of Taleb’s comments is how it reverses my intuition about tolerance and intolerance – to change the world, be intolerant.
Footnotes / justifying nonsense
1. Just giving it my best shot here, folks…
I admittedly don’t have a great grasp of all the details as it relates to gender-neutral restrooms and I don't want to give off a false impression of expertise on the matter. I just want to keep focused on the idea that minority preferences can become the majority’s norm given a low switching cost and a sufficient level of intolerance from the minority. If the costs of building these restrooms became sufficiently low, I could see this moving closer to the norm because anyone willing to use a segregated restroom should also be happy to use a sufficiently private alternative.
When I thought more about what I wrote here, I realized that this describes a function of the law I had never heard articulated before – to codify a minority preference that would benefit all of society yet lacks Taleb’s required criteria of low cost and sufficient minority intolerance.
Wednesday, April 17, 2019
reading review - skin in the game (the melon metaphor)
A thought-provoking section of Skin in the Game was Nassim Nicholas Taleb’s analysis of how intolerant minorities have the potential to alter a society’s default preferences. He used an example involving food preferences within a neighborhood that I will recreate almost in its entirety here.
First, suppose that you live in a family of four. Let’s say someone in the family suddenly develops an allergy to melons. The level of intolerance is severe enough that even airborne exposure will cause immediate illness. Although the rest of the family loves melons, the fruit is far from being a nutritional requirement in anyone’s diet and the family stops buying melons with little extra fuss.
Next, let’s say this family lives in a multi-unit building. One day, neighbors chat as neighbors do and it comes up that due to this new intolerance for melons the family on the first floor no longer buys the esteemed fruit. After parting ways, the upstairs neighbors reflect on this news and decide to play it safe – they, too, like melons, but it’s probably better to keep the entire property as melon-free as possible lest anyone become gravely sick thanks to one other person’s preference for a bit of fruit at lunch. The neighbors soon stop buying melons.
Finally, suppose these families live on a tight-knit block. Every year, they all get together at one house for a block party. In the process of planning the party, the news gets out that the families living in the nice duplex unit next to the dog park no longer eat melons. Oh dear, think the hosts, and they instruct the guests to leave all melons at home, which is too bad since about a third of the block’s residents describe melon as being ‘one of their favorite fruits’. But these same residents also describe 'no one dying from allergies in the neighborhood' as being among 'their favorite preferences' (or at least would do so, were anyone smart enough to ask). Given the realities of the neighborhood, these residents secure other fruits to bring to the party.
When the guests arrive at the block party, their curiosity leads to the melon allergy becoming the first topic of conversation. How bad is the allergy? No one knows, but everyone agrees that it is serious. What if someone threw out a melon on trash day and the wind picked up the scent? Let’s not think about that, everyone agrees. Or what if a dog - a homeless dog, to clarify, since the neighborhood dogs would never do anything to harm the block, those piles of dung on the sidewalk notwithstanding - what if a homeless dog got into a trash bag, pulled out a half-eaten melon, and then left it on the front steps of the duplex? Oh, the horror! By the time the party ends, no one on the block will ever buy another melon.
And the math of this result is the point of the allergy analogy - one person's intolerance of the fruit leads the entire block to stop buying melons.
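The arithmetic behind the cascade can be made explicit with a quick sketch (the allergy rate and the group sizes here are invented for illustration, not taken from the book):

```python
def melon_buying_share(p_allergy, people_per_group, groups):
    """Share of blocks that still buy melons when a single intolerant
    member flips an entire household, and a single intolerant household
    flips the entire block."""
    tolerant_household = (1 - p_allergy) ** people_per_group
    return tolerant_household ** groups

# A 1% allergy rate among 80 people (20 households of 4) is enough
# to stop melon-buying on a majority of blocks.
share = melon_buying_share(0.01, people_per_group=4, groups=20)
print(f"{share:.0%} of blocks still buy melons")
```

Because the tolerant share compounds through every layer of the neighborhood, a tiny intolerant minority ends up setting the default for everyone.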
Thursday, April 11, 2019
reading review - skin in the game (show, don’t tell)
Longtime readers of Nassim Nicholas Taleb’s many bestselling books will know that he has very little patience for those who talk, talk, talk about topics they do not know a great deal about. I suppose this impatience has many possible sources and going through each possible one seems like an endless task I’ll save for another day.
One consistent factor I noticed is how Taleb dismisses talk, talk, talk anytime actions give more insight into thinking than words. For Taleb, good investment advice means telling others what is in your portfolio as opposed to telling others about all the surefire bets that you have mysteriously chosen not to make for yourself (1). Astute readers will make the connection here to the title of the book and recognize the crucial distinction between having an opinion and having a stake in the outcome.
There is a little more to this thought than linking the ‘skin in the game’ concept to some real-life applications. Taleb is also using this example to explore a much deeper observation about the tension between abstract intellectual principles and the realities of daily life. Taleb notes that when private life and intellectual opinion contradict, it is often the opinion and not the life that gets compromised. This insight draws partly from the basic truth that people sometimes provide explanations, thoughts, or beliefs about a hypothetical choice that do not hold up when the actual opportunity to make the choice comes to them.
However, the insight about this tension also recognizes how difficult it is to abide by certain hypothetical principles when these challenge real loyalties to family, friends, or community. In the face of these loyalties, no intellectual, ethical, or moral stance is safe. When Taleb points out how showing someone else your portfolio has much more significance than telling someone else what you think might be a good investment, I don’t think he is just extending the barroom logic of ‘wanna bet?’ to the realm of personal finance. I think he is also aware that people always make sound long-term recommendations in a vacuum that ignores how quickly an opinion will be compromised if a loved one asks them to do otherwise.
Footnotes / self-plugs are no plugs
1. A TOA classic?
Longtime readers may also recall this post, a simple observation I made about a friend who worked at a firm that actively managed investment portfolios.
Monday, April 8, 2019
leftovers #2 – skin in the game (bad reasoning: the x-factor)
The third and final common reasoning error Nassim Nicholas Taleb cites in Skin in the Game is inappropriately reducing a problem with many dimensions down to just a single factor. I think this observation is pretty straightforward and doesn’t require an extended thought from me – if a given problem has three factors to consider and you decide to oversimplify things down to one factor, well, reader, I don’t really know for sure what kind of outcome you might be expecting.
In the past, I've ranted about the opposite of this very topic by using the general expression 'debate club mentality'. There are simply some matters that do not deserve a thorough examination of each side's three best arguments. The problem I've always had in mind when criticizing the 'debate club mentality' is the possibility that training people to think this way stunts their ability to determine whether a single powerful argument is more than enough to overrule the evidence presented by any number of equally true but relatively insignificant rebuttals.
Again, Taleb's point is the opposite of mine - Taleb warns against too much simplification while I worry about adding needless complexity. Which is better? I think the best answer, as always, is that it depends. The key skill to develop in this regard isn't to pick one approach or the other, it's to become capable of knowing which method is more relevant for a given situation.
Thursday, April 4, 2019
leftovers – skin in the game (bad reasoning – actions and interactions)
In a recent post, I reviewed the way Nassim Nicholas Taleb analyzes common reasoning errors in his recently published Skin in the Game. My recent post focused on ‘the nth effect’ and the reasoning problems it can cause. Today, I want to look more into a second type of error – focusing more on actions instead of interactions.
This is a very similar concept to the nth effect problem but there are a couple of minor differences. One subtle difference is that nth effects tend to rely on a more traditional ‘cause and effect’ mode of thinking while actions-interactions consider the way an unseen or unconsidered factor might influence the way a cause or an effect would manifest itself. If the nth effect is a way to ask ‘then what?’, actions-interactions is a way to ask ‘what else?’
A good thought exercise that illustrates the difference in these kinds of thinking is to consider what might cause automobile fatalities to fall. The reasons centered on lowering the rate of automobile fatalities are good examples of nth effect thinking. A new driving instructor who is far superior to a predecessor will improve the overall skill level for all his or her students. This, in turn, leads to an increase in the general skill level among drivers that leads to safer driving overall and a lower automobile fatality rate. Other similar examples could start with a different initial event – a governor introduces a new bill to fix potholes or someone at Honda invents a better airbag – and asking a series of ‘then what?’ questions in line with nth effect thinking would lead to the same overall result – a lower rate of automobile fatalities brought about by improved road quality or better crash safety features.
A less obvious set of reasons keeps the rate of automobile fatalities constant and focuses instead on explaining why people might drive less than they did in the past. In other words, these reasons show that keeping the likelihood of a crash constant for any given trip can be irrelevant from the perspective of total automobile fatalities so long as the total number of trips is decreasing. These reasons are more about interactions than actions. If investment in public transportation rises significantly and people start trading in their cars for train passes, the number of car trips taken overall will eventually decrease. This, in turn, would lower the total number of automobile fatalities because there are fewer people on the road to begin with.
The tricky part of these two concepts is that it can be hard to tell when nth effect thinking ends and actions-interactions begins. This brings me to magnitude, the second difference between the two concepts. Nth effects tend to work linearly in comparison to actions-interactions, which means magnitude has a consistent effect on the outcome. The airbag example above works here because each improved airbag fractionally lowers the fatality risk. If the public benefit of installing improved airbags in every car is a 1% reduction in total fatality rates, then each improved airbag moves the total fatality rate closer to that 1% target in equal and proportional measure.
On the other hand, increased investment in public transportation does not necessarily have such a smooth impact. The magnitude of the investment is crucial because until a certain amount is invested there will be no change to driver behavior and therefore no improvement in the fatality rate. However, different levels of investment will cause the transit system to seem like a good idea for people in stages. A helpful way to think of this is by extending a train service - for each additional station build into the line, more people who live or work near the new station will trade in a car for a train pass. Unlike with the airbag example, however, there is no linear progression to these changes. If a station costs $200 million to complete, it isn't until that 200 millionth dollar is spent that commuters will make new decisions and the fatality rate will see any improvement.
Decision makers that consider the difference between nth effects and actions-interactions can find endless opportunities to maximize their time, money, and efforts. Though instinct suggests that the best investment of resources is always the one with the most direct impact on the desired outcome, recognizing the influence of hidden or subtle interactions can present much better alternatives for certain situations.
This is a very similar concept to the nth effect problem but there are a couple of minor differences. One subtle difference is that nth effects tend to rely on a more traditional ‘cause and effect’ mode of thinking while actions-interactions consider the way an unseen or unconsidered factor might influence the way a cause or an effect would manifest itself. If the nth effect is a way to ask ‘then what?’, actions-interactions is a way to ask ‘what else?’
A good thought exercise that illustrates the difference in these kinds of thinking is to consider what might cause automobile fatalities to fall. The reasons centered on lowering the rate of automobile fatalities are good examples of nth effect thinking. A new driving instructor who is far superior to a predecessor will improve the overall skill level for all his or her students. This, in turn, leads to an increase in the general skill level among drivers that leads to safer driving overall and a lower automobile fatality rate. Other similar examples could start with a different initial event – a governor introduces a new bill to fix potholes or someone at Honda invents a better airbag – and asking a series of ‘then what?’ questions in line with nth effect thinking would lead to the same overall result – a lower rate of automobile fatalities brought about by improved road quality or better crash safety features.
A less obvious set of reasons keeps the rate of automobile fatalities constant and focuses instead on explaining why people might drive less than they did in the past. In other words, these reasons show that keeping the likelihood of a crash constant for any given trip can be irrelevant from the perspective of total automobile fatalities so long as the total number of trips is decreasing. These reasons are more about interactions than actions. If investment in public transportation rises significantly and people start trading in their cars for train passes, the number of car trips taken overall will eventually decrease. This, in turn, would lower the total number of automobile fatalities because there are fewer people on the road to begin with.
The tricky part of these two concepts is that it can be hard to tell where nth effect thinking ends and actions-interactions thinking begins. This brings me to magnitude, the second difference between the two concepts. Nth effects tend to work linearly in comparison to actions-interactions, which means magnitude has a consistent effect on the outcome. The airbag example above works here because each improved airbag fractionally lowers the fatality risk. If the public benefit of installing improved airbags in every car is a 1% reduction in total fatality rates, then each improved airbag moves the total fatality rate toward that 1% target in equal and proportional measure.
On the other hand, increased investment in public transportation does not necessarily have such a smooth impact. The magnitude of the investment is crucial because until a certain amount is invested there will be no change in driver behavior and therefore no improvement in the fatality rate. Instead, different levels of investment make the transit system seem like a good idea to people in stages. A helpful way to think of this is extending a train service - for each additional station built into the line, more people who live or work near the new station will trade in a car for a train pass. Unlike with the airbag example, however, there is no linear progression to these changes. If a station costs $200 million to complete, it isn't until that 200 millionth dollar is spent that commuters will make new decisions and the fatality rate will see any improvement.
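The contrast between these two shapes of impact can be sketched in a few lines of Python (the numbers here are my own illustrative assumptions, not figures from the book):

```python
# Toy sketch of linear (nth effect) vs. threshold (actions-interactions) impact.
# All figures are illustrative assumptions.

def airbag_effect(airbags_installed, total_cars, max_reduction=0.01):
    """Linear nth effect: each improved airbag moves the total fatality
    rate proportionally toward the full 1% reduction."""
    return max_reduction * (airbags_installed / total_cars)

def station_effect(dollars_invested, station_cost=200_000_000, reduction=0.01):
    """Threshold interaction: nothing changes until the station is
    fully funded, then the entire effect arrives at once."""
    return reduction if dollars_invested >= station_cost else 0.0
```

Installing half the airbags buys half the benefit, while $199,999,999 of transit investment buys no benefit at all.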
Decision makers who consider the difference between nth effects and actions-interactions can find endless opportunities to make the most of their time, money, and effort. Though instinct suggests that the best investment of resources is always the one with the most direct impact on the desired outcome, recognizing the influence of hidden or subtle interactions can reveal much better alternatives in certain situations.
Saturday, March 23, 2019
reading review - skin in the game (bad reasoning - the nth effect)
One of my favorite aspects of Skin in the Game was how Nassim Nicholas Taleb analyzes bad reasoning. For him, the problem most people have with reasoning is not a failure to understand basic cause and effect – the problem is understanding how the effect eventually becomes the cause for the next step. The typical error, he notes, is to calculate the effect of the first step correctly without accounting for how this step alters the calculations for subsequent steps.
To put it another way, almost everything in life has a ‘then what?’ element. Taleb describes this as the ‘nth order effect’ – the step that comes beyond the easily seen second, third, and even fourth step in any calculation. This explanation appealed to me because I’ve often described (though some would say oversimplified) economics in the same way – the subject is all about asking ‘then what?’ and understanding the ramifications of what happens after the initial event.
In the larger context of the book, Taleb relates the ‘then what?’ element to how a transfer of risk burden from the risk-taker to some other element of a complex system can (almost always) have negative consequences for the health of the system. One example he mentions is how carpenters were once liable for the death penalty if a house they built fell apart and caused the death of its residents. When society recognized that carpenters were always building houses as sturdily as possible, these penalties were (rightfully) repealed. From here, the ‘then what?’ line of reasoning that follows leads first to better ease of mind for carpenters – no need to worry anymore about fluke factors causing the house to fall over.
But then, with the penalty for bad work reduced, carpenters are free to build slightly riskier homes. As homes start falling over again, insurance policies become available to protect against financial liabilities. Eventually, we can look back and say that the change in punishment gave carpenters the freedom to build slightly riskier homes – ironic given that the punishment was originally changed to reflect how carpenters never built risky homes.
Thursday, March 21, 2019
reading review - skin in the game (upside downside)
A simple concept best summarizes the main idea of Nassim Nicholas Taleb’s Skin in the Game – if there is upside, there must also be downside. Taleb builds on the basic foundation of this principle throughout the book and highlights many examples of its applications to a wide range of topics including complex systems, analysis, and ethics.
Let’s take a brief look at how this principle applies to complex systems. If every upside opportunity comes with a proportional downside risk, it means that no one within the system can win without bearing a fair share of downside in the event of a loss. This works just fine in theory if people who can afford the downside take the risks because the losers in this setup will not be ruined in the event of a bad outcome and will not burden innocent others with an undue share of the downside.
In the best systems, people pool their risk together and share the burden so that, collectively, upside risks can be taken without putting the safety of the system in jeopardy. This is the general setup of close-knit communities, fair insurance policies, and equitable access to affordable credit. Over time, the system collectively advances as members and groups benefit from upside without any single component of the system being ruined by downside.
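The pooling idea comes down to simple arithmetic, which can be sketched as follows (the numbers are my own back-of-the-envelope assumptions, not Taleb's): a loss that would ruin any individual alone is comfortably absorbed by the group.

```python
# Illustrative risk-pooling arithmetic (all numbers are assumptions, not from the book).
n = 100                  # people in the pool
individual_wealth = 10   # what each person can afford to lose
loss = 50                # size of a bad outcome
p = 0.05                 # chance each person suffers the loss in a given year

# Alone, a single loss exceeds what any individual can absorb: ruin.
ruined_alone = loss > individual_wealth            # True

# Pooled, expected total losses sit well inside the group's total wealth.
expected_losses = n * p * loss                     # 250.0
total_wealth = n * individual_wealth               # 1000
pool_survives = expected_losses < total_wealth     # True
```

The group collectively takes on upside risks that no single member could afford, which is the essence of fair insurance.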
The problems arise when the mechanism of risk transfer loses its inherent symmetry. Stated simply, this means a risk taker who would benefit from a win does not suffer from a loss. There are some easy, high-profile examples of this point. My personal favorite is the executive who pockets a megabucks year-end bonus despite the company having laid off hundreds. If the big shots can benefit even as their (former) employees suffer, how can the company expect to attract or retain top employees in the coming year? The obvious answer is that it can't (and perhaps this explains why big companies are often dragged under by smaller, newer competitors).
Another good example that I’ve highlighted on TOA in the past is how municipalities tend to invest far more in automobile infrastructure than they do in cycling or pedestrian equivalents (in this ancient post, I cited Happy City, a book that put this ratio as high as thirty-to-one). What do we expect to happen in cities where it is far easier (and safer) to drive than it is to bike or walk? My guess: the city becomes a place where people drive more than they bike or walk. Part of the explanation is the safety benefit, of course, but a more theoretical approach suggests driving is cheaper in the sense of ‘buying upside’ - a driver gets somewhere faster and, in the event of a collision, stands a much better chance of being unharmed than a biker or walker. Over time, these places lose their sense of community and togetherness as people isolate themselves in their cars and go weeks at a time without ever interacting with a stranger.
Taleb’s overall point is that a system is in danger of crashing anytime the mechanism of risk transfer starts to lose its symmetry. The somewhat ridiculous example he cites is in commercial aviation. In the early days, a bad pilot would remove himself from the system by crashing the plane. This meant that inexperienced pilots bore all the risk while training to become a pilot. Over time, training tools such as the flight simulator meant pilots who would once have been, er, ‘ruled out’ of a job could now ‘crash’ their plane without weeding themselves out of the system. If it weren’t for the major advances in flight technology that made it easier for pilots to fly (and therefore made it possible for lesser pilots to fly without transferring risk to passengers), it is entirely possible that the flight industry would have crumbled under the weight of the accidents caused by its inexperienced pilots.
The most conversational way Taleb makes this point is when he talks about having something described as ‘good for you’. No doubt about it, many things are good for you and it would be unwise to dismiss every such opportunity out of hand. However, a good test is to make sure the person selling the idea can explain why any idea that is good for you is also good for me. If such an explanation is not forthcoming, it suggests that though you may benefit from the opportunity if things work out, you might also be on the hook for a greater share of the downside risk if things were to go wrong.
Saturday, March 16, 2019
reading review – skin in the game (riff offs)
Hi all,
Today, I'll riff off a couple more stand-alone ideas from Skin in the Game that I did not get to in my first post.
Animosity towards wealth often ties back to the zero-sum nature of wealth accumulation in the country in question. When wealth is created (or gained after destruction) the view is a little different. A good rule of thumb is that inequality is almost always zero-sum (because it is measured relatively) and therefore any policies that increase inequality will always be seen with suspicion.
People tend to use numbers as a substitute for solid logical arguments.
There seem to be two ways people tend to look at inequality. One method is to take the difference between the top earner and the bottom earner. The other method is to ignore the top and consider what is happening at the bottom.
I think there is a good argument for the former approach but ultimately its usefulness fails when compared to the latter. One problem is that the former is prone to creative thinking about what ‘policy’ means. Our default ‘policy’ of allowing e-commerce means we all buy packages from Amazon, benefit from its superior delivery experience, and then complain when inequality goes up after the revenues accumulate with one of the world’s richest CEOs.
The approach I favor is to allow endless inequality so long as everyone at the bottom has 1) enough to live on without my being asked to help via charity, panhandling, tax breaks, etc., and 2) the same opportunity to move up if they so choose. In other words, I think income inequality is a distraction for those who want to come up with good social policy because it allows someone to cut income inequality in half without actually helping the poor, the sick, or the hungry. When I’m hungry, I don’t care if some new policy takes away half of Warren Buffett's money, I care about getting a sandwich, and to me a good policy would bring me a sandwich regardless of how it changed some calculation of inequality. If everyone has enough, who could complain if some people ended up with a lot? I think if we lifted the bottom of the income distribution up above a humanely defined sustenance line, we could all consider ourselves free to become whatever form of capitalist a-hole we want to become.
Social friends require balance in contribution and hierarchy. A conversation is a good place to see this – if the contributions are equal, the relationship is likely to last.
This sharp insight reminds me of the lesson I drew from the otherwise forgettable Working Together – fifty-fifty, or it won’t work.
Whether formally stated or otherwise, there is an underlying expectation of equal contribution in any relationship that, if left unacknowledged for too long, can suddenly expose itself as a fatal crack in the foundation.
Relationships between countries are often conflated with relationships between governments.
I thought this was an intelligent observation yet also one that probably cannot be helped. A relationship between governments makes some sense because the groups are small enough to interact regularly if they so desire. But the concept of a relationship ‘between countries’ is essentially meaningless.
In fairness to the author, I think the main point here is that the way an ordinary citizen of one country views a counterpart of another country is often far from how that citizen’s head of state views his or her counterpart. It makes me wonder how many people around the world would assume prior to meeting me in person that I was a Trump-like (or Trump-liking) personality, just based on my citizenship.
Change for change’s sake often causes us to lose the benefits of previous changes. Evolution requires slow and steady change – any faster rate means progress is being traded for the equivalent of mutations.
Taleb’s comment exposes a relationship I’d never given much thought to previously – that between evolution and mutation. I think many of us have been involved at one time or another in a sudden change that seemed to have no point beyond the feeling that a change was needed. As I think back to my own such experiences, I can’t say these changes always worked out one way or the other.
However, there was almost always something I lost in each instance, something positive that depended on the prior condition in order for me to make anything out of it. Although in some cases perhaps the new positives outweighed the loss, I can’t look back and say this needed to be the case – with a little more care, the new could have been gained without requiring the trade-in of the old.
Friday, March 8, 2019
i read skin in the game so you don't have to
Hello reader,
Today’s post is the first in a series about Nassim Nicholas Taleb’s newest book, Skin in the Game. Over the course of the next few weeks, I’ll take a closer look at some of the themes in this work – nth order effects, intolerant minorities, the importance of public perception, and so on.
As was the case for most of his other books (except, I suppose, his first), Skin in the Game expands on a small portion of his prior book, Antifragile. The main idea Taleb explores in the book is risk asymmetry. This is when a person or persons who stand to benefit from the upside of an outcome do not bear a proportionate level of downside risk. I’ll look more closely at his ideas in those upcoming posts.
For today, I thought I would riff a little bit on some of his more interesting one-off ideas that did not fit neatly into any of the themes I identified.
Thanks for reading.
Tim
Employees are expensive because they must be paid during non-work hours in order to increase their availability during regular work hours.
This ties back to a concept from his prior works – the difference between a contractor and an employee. Though a well-organized system of contracted work tends to produce better results for the organization and the worker, the need to have a certain number of workers available during traditional ‘work hours’ forces the hiring of salaried employees.
To put it another way, employees tend to be grossly underpaid for the work they complete for an employer but significantly overpaid for the time they commit to the employer. This is reflected in how employees-turned-contractors often experience a significant boost in per-hour compensation when compared to their former salaried position. However, a contractor is paid fairly when not working – $0 per hour – whereas an employee’s pay rate while not working – such as during a coffee break – remains exactly the same as it does while working.
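The asymmetry can be put in quick arithmetic terms (the salary and rates below are hypothetical figures of my own, not numbers from the book):

```python
# Hypothetical figures to illustrate the employee/contractor pay asymmetry.
salary = 80_000           # annual employee salary
committed_hours = 2_000   # hours the employee must be available
worked_hours = 1_200      # hours of actual completed work

# The employee's rate is the same whether working or on a coffee break.
employee_rate = salary / committed_hours          # 40.0 per committed hour
employee_rate_worked = salary / worked_hours      # ~66.7 per hour of real work

# The contractor earns a higher rate, but only while working - and $0 otherwise.
contractor_rate = 100
contractor_pay = contractor_rate * worked_hours   # 120,000
```

Measured per hour of completed work the employee is underpaid relative to the contractor; measured per hour merely committed, the employee is overpaid.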
Most fail to see how increasing wealth leads to declining utility because constructed preferences emerge as financial means increases.
This was a good point about how increased means allows access to previously inaccessible goods, entertainments, or services that seem like good ideas mostly for their novelty but have very little lasting effect on well-being. It might be fun to go out for an expensive steak dinner, for example, but I’d prefer going to Sapporo Ramen ten times with the same money.
Although my exact example might not hold for all of my readers, I’m sure everyone can think of their own versions of this phenomenon.
Work that encourages you to cut corners, become more efficient, or optimize will eventually become work you dislike.
This is my favorite stand-alone insight from the book. As I think back over my various experiences, I see this pattern emerge time and again. Though there is a degree of skill development involved in optimization and mastery is often required of anyone who can improve process efficiency, work that focuses on these goals is trading quality of output for quantity of output.
My belief is that the work people find most meaningful emphasizes quality over quantity. When a person produces the highest quality work he or she is capable of, the resulting sense of satisfaction runs far deeper than when increased efficiencies allow widget output to rise 3% over the prior workday’s total.