Showing posts with label books - deep thinking. Show all posts

Friday, July 5, 2019

the 2018 toa book award – preliminary round, part four

The theme for today’s eliminations is the simplest one yet – chess.

The Seven Deadly Chess Sins by Jonathan Rowson (January 2018)

I finished this dense, insightful work from grandmaster Rowson after a year of stop-and-go reading. Its structure enabled this approach because the book is cleanly divided into seven sections, each focused on one common cause of defeat in chess that Rowson identified over a lifetime in the game. As Garry Kasparov echoes in the book I highlight below, great chess players often recognize that the biggest threat to their own success lies within. The way I saw it, this truth applies not only to chess but also to many other aspects of life.

Parting thought: A self-organizing system uses arrival sequence to determine the storage method. This is likely to result in sub-optimal storage because it fails to account for retrieval method.

This was a great insight to pull from the book because although I’ve occasionally applied the principle by accident, the idea never stuck in my mind as a guide for my storage projects. It leads me to a basic observation about organization – an organized person knows how to quickly retrieve everything.

Deep Thinking by Garry Kasparov (July 2018)

Former chess world champion Kasparov is carving out an interesting post-competitive career. This book about artificial intelligence is a reflection of one aspect of his work. Kasparov’s experience of being the world’s best at the moment chess computers began to challenge top grandmasters gives him a unique perspective on the rise of machines. His observations, insights, and conclusions about our society and its increasing reliance on computer programs make for a thoughtful read.

Parting thought: Chess computers started beating humans when they took advantage of their relative strength.

One great challenge in neuroscience is developing an understanding of how people think. Until this knowledge exists, it remains impossible to ‘program’ the equivalent into software. This doesn’t necessarily stop programmers from trying, yet it seems the early chess computers failed to reach grandmaster level for exactly that reason. In short, the computer fell short whenever it stopped to ‘think’ about a position in the way its programmers imagined humans think.

As computing power increased steadily through the 1990s, programs shifted to rely more on brute force and play to their greater processing ability. In chess programs, this meant computing as many legal moves as possible and calculating which of these sequences put the computer in the strongest position. Computers are simply much better at counting than we are, and leveraging this relative strength against top human players was the catalyst for their breakthrough against Kasparov and his contemporaries. In the final analysis, a computer’s systematic counting ability proved superior to the messier human method of analyzing positions, recognizing patterns, and adhering to overarching principles whenever faced with a novel situation.
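
The brute-force idea described above can be sketched with a toy minimax search – exhaustively scoring every line of play and picking the branch whose worst case is best. This is my own minimal illustration, not code from the book; the tree and its leaf evaluations are made up.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree given as nested lists.

    Leaves are numeric evaluations of final positions; internal
    nodes are lists of the moves available from that position.
    """
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Toy tree with two candidate moves. The left branch guarantees at
# least 3 even against best defense; the right branch only 2.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3
```

Real chess engines add a depth cutoff and clever pruning, but the core is the same counting exercise: the machine never ‘understands’ a position, it just evaluates more continuations than any human could.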

Thursday, May 30, 2019

reading review - deep thinking (riff offs)

Hi all,

Today, I’ll wrap up my review of Garry Kasparov’s Deep Thinking with some of my responses to a few of his thoughts that I did not cover in my prior posts.

Merely leaving its life’s work behind can disrupt a fragile mind.

This universally observed phenomenon seems to suggest that the sensation of loss in the wake of losing or stepping away from work has more to do with the subtraction of the work itself than with the loss of the natural support structure that comes with work. But it might not be the fragility of a mind that is relevant; perhaps the more important concern is how the loss of the people, places, and things inseparable from the work destabilizes the mind to the point of fragility.

Humans tend to downplay work ethic as a talent and often go so far as to hold the two concepts entirely separate. But isn’t work ethic just as much a form of talent?

My high school basketball coach used to say “hard work beats talent when talent doesn’t work hard.” It was even briefly printed on the back of our program's t-shirt. The expression fed into our underdog mentality and I liked how the mantra motivated our team to work.

I take a different approach to the concept today. Like Kasparov, I think hard work is a form of talent. In fact, I would even consider whether hard work is the only talent, or at least its most indispensable component. What's the point of talent if you never work hard enough to make use of it?

It works a bit like literacy. The ability to read is a talent that takes many years of effort to learn and cultivate. And yet many squander their literacy by simply remaining too lazy to benefit from their reading ability. What’s the point of literacy if you never bother to read?

If a very talented or able person does something others do not immediately understand, it often can be assumed that this action merely expresses the underlying talent or ability.

Let’s suppose you are competing with an opponent you consider superior. This person makes a move or a decision that you cannot figure out. Are you going to assume it was clever or an error? I think most tend toward the former because the opponent’s superiority makes an error seem implausible.

The argument for assuming a smart person always makes smart decisions is a good one - a smart person makes smart decisions! However, you must believe in yourself, your knowledge, and your instincts ahead of another’s reputation, title, or authority. Otherwise, how do you expect to ever see the truth in any situation, especially when reality runs against your perceptions? How will you ever achieve independence in your thoughts, behaviors, or beliefs?

A bad plan is better than no plan for those who want to learn from their mistakes. Otherwise, at best a person will only become a good improviser.

The most difficult aspect of learning from a mistake is separating a poor decision from bad luck. Those who plan can use hindsight to assess the various outcomes that might have resulted from different choices, while those without plans cannot reliably reconstruct their options and cannot evaluate their decision-making process with any rigor.

Narrative fallacies make it difficult to analyze games. ‘The winner made good moves because they led to a win’ does not account for how good each move actually was.

Michael Lewis described in Moneyball how baseball teams assessed the value of batting by comparing the result of a hit ball with other hit balls from the history of the sport that traveled with similar velocity and trajectory. If a hit resulted in a double 90% of the time and an out 10% of the time, the batter was credited with 90% of a double (and 10% of an out). This method reduces the impact of single outcomes on an analysis and helps keep the focus on the process instead.
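
The credit-assignment scheme described above can be sketched in a few lines. The outcome frequencies below are illustrative placeholders matching the example in the text, not real baseball data.

```python
# Hypothetical outcome frequencies for batted balls with a similar
# velocity and trajectory (illustrative numbers, not real data).
similar_outcomes = {"double": 0.9, "out": 0.1}

# Bases credited per outcome; an out is worth zero bases.
bases = {"single": 1, "double": 2, "triple": 3, "home_run": 4, "out": 0}

# Credit the batter with the expected value over similar batted balls,
# rather than with whatever single outcome actually occurred.
expected_bases = sum(p * bases[outcome]
                     for outcome, p in similar_outcomes.items())
print(expected_bases)  # 1.8
```

Averaging over similar batted balls is what shields the evaluation from single-outcome luck and keeps the focus on the process.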

Recent technologies have created a lot of spare time for us that we do not have the sense of purpose needed to make the best use of.

Luckily for us, TV, social media, and this half-assed blog are there to fill the void!

In my dreams I wrote the greatest song I’ve ever written, can’t remember how it goes…

I think it’s important to keep in mind that although the dream of a machine-based intelligence is always presented as a complete positive or a definite negative, the reality is always much more complicated. A computer that was smart enough to build a tree house might be the dream but until we bring Dreamland into reality it probably isn’t a bad idea to know what to do with a hammer and nails.

The devil is always in the details, I suppose…

…what?

Fine…

That’s not from Deep Thinking, that’s from Courtney Barnett’s ‘History Eraser’. But it is a riff off, you know, and who better…

Never mind.

Thanks for reading.

Wednesday, May 22, 2019

reading review - deep thinking (education)

Education is a topic I did not expect Garry Kasparov to discuss in Deep Thinking. However, I thought he made a couple of interesting comments about the matter in the context of how our ongoing shift toward machine intelligence will challenge education to make adjustments and I want to highlight those challenges today.

One observation I agreed with was that kids tend to learn much faster than traditional education methods allow. The ongoing process of information becoming increasingly available through new devices is an opportunity for educators to change their teaching methods and take advantage of this observation. If teachers can teach kids how to learn, every student gains the freedom to learn at their own pace using the bounty of new tools and technologies at their fingertips.

The other thought I liked was how the next major innovation in education is likely to come from developing countries. This tends to be true for innovation in general – it comes from places that have no need to maintain a status quo (1). As education systems worldwide consider the ramifications of students having immediate access to all the answers, look for the best new questions to come from the countries that have no commitment to the questions they are asking of their students today.

Footnotes / money, money, money…

1. M-Pesa basically means texting money

This reminded me of how M-Pesa is a popular way of transferring money in many countries, yet it seems Venmo is the closest equivalent we’ll ever have in the USA. The difference, as it applies to the thought above, is that Venmo does a better job of maintaining the status quo in the USA than M-Pesa would because it builds on PayPal (an existing method of money transfer) rather than on text messaging (which would be a new money-transfer channel in the USA).

Monday, May 20, 2019

reading review - deep thinking (competition)

Garry Kasparov’s Deep Thinking examines the progress being made in the world of computing and considers how these advancements will alter the course of the human race. It’s a compelling read because of Kasparov’s significant experiences within the field and I’ve noted a few of his insights in recent posts.

However, today I want to focus on a different aspect of his book – competition. As a former world chess champion, Kasparov is in some ways uniquely positioned to discuss the nuances of competition and I thought many of his comments reflected a deep understanding of the competitor’s mindset.

One of Kasparov’s main strategies in any match was to prevent the opponent from playing to his, her, or – in the case of IBM’s Deep Blue, its – strengths. This is a concept I’ve heard echoed over the years by competitors in all manner of fields. Michael Lombardi describes this in the context of helmet football as making right-handed opponents play ‘left-handed’, while Paul Graham has written that small companies can beat larger companies if they make their bulkier competitors ‘run uphill’. What Kasparov’s writing lacked in colorful analogies it more than made up for in direct tactical recommendations – if an opponent was good with the bishops, he would congest the center to reduce the possibility of diagonal movements.

This thought leads naturally to its inverse – a good player cannot maintain success unless there is constant variation in play. A good player becomes a great player by becoming unpredictable because an opponent cannot easily identify which strengths to nullify. To recycle the above example in this context, the strategy of nullifying or removing the bishops is less effective if the opponent is comfortable playing a closed game that emphasizes the knights.

A strong competitor also understands the difference between a true weakness and a theoretical weakness. A good player either wastes training time by futilely working on every weakness or takes no action to protect against having weaknesses exploited by crafty opponents; such a player’s discomfort with weakness means too much or too little preparation time goes into preventing weaknesses from leading to a defeat. A great player understands that a theoretical weakness is irrelevant if an opponent cannot exploit it, and worries only about the true weaknesses an opponent can take advantage of during competition.

The observation I liked the most was that some competitors make for better challengers than they do champions. There are undoubtedly many explanations for this – perhaps some will cite the ‘underdog mentality’, which at the very least makes for a nice newspaper story. But the explanation I like the most takes a slightly different angle – when winners credit their own good play ahead of their opponent’s poor play, success can be the enemy of future success. Although this isn’t necessarily how every new champion views the accomplishment, I suspect the ones who do struggle to remain at the top are at some level too convinced of their own superiority to treat challengers with the same respect that they once gave to the champions they worked tirelessly to dethrone.

Saturday, May 18, 2019

reading review - deep thinking (change, progress, and innovation)

One of the major themes in Garry Kasparov’s Deep Thinking was the way society struggles to find the right pace for change. It’s a struggle most noticeable whenever we lament technology taking over our work – in other words, a constant protest about one of the most basic and repeated stories in the history of civilization.

Perhaps this speaks to the sheer power of the status quo, especially for those who would benefit from keeping things unchanged. As Kasparov points out, a gravedigger would have selfish reasons for worrying about new breakthroughs in medicine, just as a mosquito net manufacturer surely does better business in the absence of a malaria cure. There is also a good example from the book about how the status quo can excuse us from facing a certain fear – until a 1945 work stoppage by elevator operators forced many to climb skyscrapers on foot, the general mood in society was apprehensive about riding in an elevator alone. The last true obstacle to the full adoption of automatic elevators, in other words, was the public’s reluctance to make the change.

The right pace for change is a dilemma that defines the implementation of any new innovation. The four decades it took for automatic elevators to catch on seem a bit long in hindsight, but there is always good reason to be skeptical about how much can change right away. As Bill Gates once pointed out about almost all new technologies, progress forecasts for the first couple of years are almost always exaggerated, while the long-term potential of what might happen as new users adopt the technology is rarely given full weight. The struggle between idea and breakthrough, then, is finding the balance between the champions who overestimate short-term progress and the opponents who underestimate the long-term gains.

Of course, moving slowly isn’t a guarantee of anything. Those who rely only on optimization to bring about improvement can obscure the need for a more thorough rebuilding or rewriting of the existing method. If the rate of change is too slow, the potential of creation through destruction is exchanged for the surefire but perhaps limited improvements of a committed optimization approach – evolution is, after all, merely change, and no guarantee of improvement.

Thursday, May 16, 2019

reading review - deep thinking

Deep Thinking by Garry Kasparov (July 2018)

Russian chess champion Garry Kasparov explores the history, progress, and implications of machine intelligence in his 2017 book. Deep Thinking covers a wide range of topics related to the idea, and I’ll cover some of Kasparov’s varied insights in a series of upcoming posts. For today, however, I want to cover the section of the book devoted to the matches he played during the 1990s against a number of chess computers.

The most famous of these matches, of course, was against IBM’s Deep Blue. The machine progressed quickly during the decade from a technically impressive but unthreatening imitation of an elite chess player to a true challenger against the world’s best player. The machine first broke through in 1996 by beating Kasparov in the opening game of their six-game match, though Kasparov recovered to win that match. A year later, in 1997, it made history by defeating Kasparov in the rematch.

The progress made by the program illustrates many of the important principles that govern machine progress. One example is Moravec’s paradox, which observes that humans are bad at what machines do well, while machines are bad at what humans do well. This suggests that a machine in competition with a human will win as long as it can fully take advantage of its superior processing power. In the chess context, a machine can make up for its shortcomings in strategic planning and pattern recognition by analyzing positions at a depth far beyond the ability of a human player. Early programmers of chess machines struggled with this trade-off by focusing too much on teaching the machine to ‘think’ like a human. Over time, as computer speed steadily improved and processing power dramatically increased, chess machines turned to using brute force to analyze as many positions as possible instead of trying to ‘think’ through a position.

To put this point in another way, machines have historically failed to ‘think’ like humans because the way humans think is not understood well enough to turn into a computer program. The solution for designers has always been to prioritize results over method instead. In the context of chess, the breakthrough came when the programs started to focus on calculation rather than thinking.

It helped computers that chess is simply not complex enough to require ‘thinking’ – brute-force methods were enough to determine the best play. There is no better example of the power of extensive tactical search than positions with low margins for error. A human in these situations will almost always make an error at some point because intuition, principles, and experience offer little guidance, and pressure, nerves, or emotion may push the human toward bad decisions. A computer, on the other hand, is unaffected by feelings and navigates such positions with the same process it uses for common ones.

A human, however, is often better equipped to navigate new or novel positions. In these moments, understanding the basic principles of the game and playing the board based on intuition works better than brute force calculations. This is a reality reflected not just on the chessboard but also in any domain where machines are prevalent. In short, automated equipment simply isn’t very flexible and humans in competition with machines can win if they can introduce uncertain elements onto the playing field whenever possible.

One up: I liked the observation that airplanes don’t flap their wings to fly. It makes the point that machine success isn’t dependent on following the blueprints set by living things and I suspect this lesson is likely to hold true even as computers continue to expand and build on the early foundations of artificial intelligence.

One down: One common form of machine learning involves feeding a computer endless examples of a desired behavior. Over time, the computer learns what a correct result looks like and tries to mimic these results with its decisions. In chess, this can lead to weird results – a computer might think queen sacrifices are a good idea in general, for example, when in reality a player only sacrifices a queen when he or she has an exceedingly good reason to do so.

The logic of this method must be applied carefully because naively transferring it to other situations can lead to very poor ‘automated’ decision making. A computer learning to drive in this manner, for example, might observe driver behavior and conclude that a green light means go, a red light means stop, and a yellow light means… accelerate through the intersection!
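
The yellow-light failure mode above can be sketched as naive imitation: copy the most frequent action seen for each situation, with no notion of why drivers acted that way. The observation data below is invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical observed driving examples as (light_color, action) pairs.
# The yellow-light examples happen to show drivers accelerating.
observations = [
    ("green", "go"), ("green", "go"),
    ("red", "stop"), ("red", "stop"),
    ("yellow", "accelerate"), ("yellow", "accelerate"),
]

# Naive imitation: for each light color, tally the observed actions
# and adopt the most common one as policy.
tallies = defaultdict(Counter)
for color, action in observations:
    tallies[color][action] += 1

policy = {color: actions.most_common(1)[0][0]
          for color, actions in tallies.items()}
print(policy)  # {'green': 'go', 'red': 'stop', 'yellow': 'accelerate'}
```

The tallies are faithful to the examples, yet the learned rule for yellow is exactly the wrong lesson – mimicking results without understanding the reason behind them.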

Just saying: I thought no point summarized the chess portion of this book better than the observation that computer programs are better at using knights than their human opponents. The reason is twofold. First, humans struggle to visualize the crooked movement of a knight with the same ease they visualize the linear movements of the other pieces.

I liked the second reason a little better – computers do not struggle with this visualization because computers don’t visualize anything. I think this point is easily lost whenever people try to think about how computers make calculations. It isn’t really a question of how the computer ‘visualizes’ the problem because a visualization is always a substitute for rigorous calculation. A computer doesn’t need to visualize because it is almost always capable of completing the full series of calculations.