
Louis Kessler's Behold Blog

Chess and Artificial Intelligence: The Future Changed Today - 6 days ago

The shocking, almost unbelievable news hit my inbox today, and I couldn’t be more amazed by what has just happened and by what the effects on the future will be.

Today it was announced that DeepMind, a company founded in 2010 and purchased by Google in 2014 to investigate the use of neural networks for machine learning, has created a program that, given just the rules of a game, can play against itself, learn, and reach champion level.

Everyone remembers Kasparov being defeated by the chess program Deep Blue in 1997. Go is a tougher game for computers. In March 2016, DeepMind created a program called AlphaGo, which defeated the world champion Lee Sedol.

But just a year and a half later, in October 2017, the algorithm achieved superhuman performance in the game of Go with only 8 hours of training.

The next month, the algorithm achieved superhuman performance in the complex Japanese board game of shogi with 2 hours of training, and then it trounced Stockfish, one of the top computer chess programs in the world, after just 4 hours of training.

AlphaZero played 100 games of chess against Stockfish. As white, it scored 25 wins, 25 draws and no losses. As black, it scored 3 wins, 47 draws and no losses. In doing so, AlphaZero played classic human openings with no opening knowledge programmed into it. It had no endgame database. It had no heuristics. It learned everything itself.

Long, long ago, when I was a student at the University of Manitoba, I dabbled in a hobby: programming a computer to play chess. I reached a point where my program, Brute Force, was one of the best in the world. I went to Seattle, Washington in 1977 for the 8th North American Computer Chess Championship, and followed that up in 1978 in Washington, D.C. at the 9th NACCC. (If you’re interested, see my writeup on my chess program, Brute Force.)

The program was called Brute Force because I concentrated on doing the minimum possible to evaluate positions, and simply let the program iterate as many moves as possible to determine the best move. I had the full use of the University of Manitoba’s IBM 370/168 mainframe, which likely was as powerful then as your smartphone is today. Smartphones today can play better chess than the big computers did back then in the Computer Chess Championships of the ‘70s.

Soon after, my programming interests switched to genealogy, but I remained interested in and followed the advances in computer chess and artificial intelligence in general. The best computer chess engines today, Houdini, Komodo and Stockfish, play at an Elo rating of about 3400. The best chess player in the world today is Magnus Carlsen of Norway, rated at about 2840. The chance of Magnus defeating one of these programs today is the same as a person rated 2280, say the 6,000th best chess player in the world, beating Magnus.
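That comparison works because Elo winning chances depend only on the rating difference, which is 560 points in both pairings. Here is a minimal sketch of the standard Elo expected-score formula, using the ratings quoted above (the function name is my own, just for illustration):

```python
# Standard Elo expected-score formula: the expected score (win = 1,
# draw = 0.5) of player A against player B depends only on the gap.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score of player A in a game against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Magnus (2840) against a 3400-rated engine:
p_magnus_vs_engine = expected_score(2840, 3400)

# A 2280-rated player against Magnus (2840) -- the same 560-point gap:
p_2280_vs_magnus = expected_score(2280, 2840)

print(round(p_magnus_vs_engine, 3))  # about 0.038
print(round(p_2280_vs_magnus, 3))    # identical, about 0.038
```

Both probabilities come out the same, roughly a 4% expected score, which is the point of the comparison.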

Stockfish and these other programs have been worked on continuously for a long time. They encompass 20 years of improvements in computer speed, better chess-specific algorithms, and inclusion and refinement of chess ideas since Deep Blue of 1997.

Yet after only four hours of training, AlphaZero defeated Stockfish handily.

But that’s not what is most amazing.

What is most amazing is that AlphaZero trained using a general game-playing learning algorithm that had zero chess-specific knowledge, other than the rules of how the pieces move and what constitutes a win, a loss and a draw.

See this Chess24 article by Colin McGourty for the chess viewpoint on this, along with a link to the paper released yesterday (Dec 5) on this breakthrough:


What Has Just Happened?

DeepMind has developed a general method where a machine can learn how to gain world-level expertise in any task once the rules are set out.

This will change everything. Think more than just board games. Think bigger.

Not long ago, I saw this tweet with a video of a robot:

Imagine now what would happen if we could program the “rules” of walking, gravity and motion into it, and run it through this learning algorithm. You would likely end up with a robot that could run, jump or swim at hundreds of miles per hour without disturbing anything in its path. (Notice the text of the tweet: “we dead”)

Let’s go further: Computer voice recognition. It’s terrible right now. It needs training and still makes lots of mistakes. This algorithm will make mincemeat of that problem.

Language translation: So easy now.

Computer vision, hearing, tasting, smelling. Now doable.

Artificial intelligence … Now it’s very scary. We had better do this right.

That’s enough. Time to breathe again and go back to genealogy programming.

A Year of “Retirement” - Fri, 24 Nov 2017

One year ago today, I celebrated my 60th birthday and on the same day I retired from my job of 41 years. Since then, it’s been quite a year. I wonder how I ever had time for a day job.

I recently got back from my 4th genealogy conference of the year. In February, I was at RootsTech in Salt Lake City, July at IAJGS at Disney World Florida, October at the Great Canadian Genealogical Summit in Halifax and I just returned from FTDNA’s Genetic Genealogy Conference in Houston.

Add to that two vacations: a Western Caribbean cruise with my wife following RootsTech in February, and a two-week family trip to Disney World Florida in June with my wife and older daughter while my younger daughter was working at Disney World for the summer.

The year started off with a “bang” after I tore my peroneal tendon away from my right ankle while playing squash. I had the operation to reattach it on December 30. I spent the next 5 weeks on crutches and the 5 weeks after that in a walking boot, which I wore to RootsTech and on my cruise.

That marked the end of my 30 years of playing squash. I quit football at 30, running at 40, tennis at 50, and squash at 60. If I can last at biking until 70, swimming until 80 and walking until 90, I’ll have done pretty well.

I’ve been wearing the FitBit my coworkers bought me for my retirement for a full year now. I average 6 hours of sleep a night, not counting about 1 hour awake a night, which I was surprised to learn is less than normal – according to FitBit, the average man my age is awake 15-31% of the night. I also learned that 10,000 steps a day is a really good goal and is not always easy to achieve. At 1,000 steps in 10 minutes, that’s 100 minutes. I learned that a day at a genealogy conference is in excess of 10,000 steps, and a day at Disney World is more than 20,000 steps. I have a few “FitBit Friends”, but it’s Carole Steers from England, whom I met at RootsTech, who blows me away in steps every day. I don’t know how she does it.

On the programming front, I came out with a minor version of Behold in January. I’ve made progress towards what will be the last release before I add editing, and I’m close but haven’t yet had the time to finish it off.

One of the reasons for Behold’s slowdown is Double Match Triangulator. My highlight of the year was having the program win 3rd place in the RootsTech Innovator Showdown. In March I created a website for DMT and started selling it. I’ve released 5 updates to DMT in the past year and I’m currently working on a major update to be released very soon.

DNA analysis is so interesting, and there is so much you can do with the data that nobody’s even thought about yet. I’ve done lots of analysis, written many blog posts, given talks, and tried everything to confirm my genealogical relationship with even some of my DNA relatives (still unsuccessful, but it will eventually happen). I also enjoyed my first Twitter #genchatDNA a week ago. One of my tweets summarizes everything:

I’ve also become a bit more active on some of the genealogy DNA Facebook groups, which have become very popular, with tens of thousands of participants including many of the experts.

And let me thank the several dozen Facebook friends who wished me a happy birthday on my Facebook timeline today, and also my Australian friends who used their time-zone advantage to start the party early.

Quite a year. Still lots more to look forward to in the years ahead.

Another Estimate of Speed and Balding Figure 2B - Wed, 15 Nov 2017

Ten days ago, I produced an article: Revisiting Speed and Balding, where I tried to duplicate the results of their Figure 2B. I posted a link to the article on the ISOGG Facebook group, and received a lot of comments, mostly from Andrew Millard and Debbie Kennett. Debbie also provided quite a few comments directly on my post and contacted one of the authors, Doug Speed, who then also commented on my post. He indicated he’s confident his simulation results were accurately tabulated and suggested “the differences come from us asking slightly different questions”. I’m trying to answer the question:

For a match at a specific segment length, what is the probability that the segment comes from a particular generation?

That’s what I thought Speed and Balding were answering as well, so I’m unclear as to what the difference might be.

I thought one possible difference might be that I’m taking this from the perspective of the matches in your match list at Family Tree DNA. Debbie Kennett rightly pointed out that including only DNA matches would only affect very small segments under 9 cM, since at least one segment of 9 cM or more (or 7.69 cM plus a minimum 20 cM total) is required before a person is considered a match. So that difference in the question would not have affected the 10 to 40 Mb range, where my statistical numbers differ significantly from their simulation numbers.
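That match criterion, as described above, can be sketched as a simple predicate. This is a minimal sketch based only on this post’s description of the thresholds; the function name and parameters are my own illustration, not FTDNA’s actual code:

```python
# Sketch of the FTDNA match criterion described in this post: a person
# counts as a match if they share at least one segment of 9 cM or more,
# or a largest segment of at least 7.69 cM with at least 20 cM shared
# in total. Thresholds are parameters so other companies' rules fit too.

def is_ftdna_match(segments_cm,
                   single_min=9.0, combo_min=7.69, total_min=20.0):
    """segments_cm: list of shared-segment lengths (cM) for one person."""
    if not segments_cm:
        return False
    largest = max(segments_cm)
    total = sum(segments_cm)
    return largest >= single_min or (largest >= combo_min
                                     and total >= total_min)

print(is_ftdna_match([9.2]))            # True: one segment over 9 cM
print(is_ftdna_match([8.0, 7.5, 6.0]))  # True: largest 8, total 21.5
print(is_ftdna_match([8.0, 5.0]))       # False: largest under 9, total 13
```

The point of the rule for this discussion is the last case: a person whose only shared DNA is one small segment will never appear in your match list.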

Andrew Millard rightly pointed out that I was one generation off in my expected number of cousins, but that wouldn’t change my results much. He also didn’t think that my figure titled “Addition of Inverse IBD Region Length Distributions” was close to Speed and Balding’s Figure 2B, but the fact of the matter is that no matter which reasonable methodologies I tried, that was the closest I could get to Speed and Balding’s result.

So I do not agree with Speed and Balding’s Figure 2B. It would be nice to see if anyone else has done similar calculations so we could compare.

In one of Debbie Kennett’s comments during our discussions, Debbie provided a link to an article that gave some data that looked like it could be used to do a third estimation of what Speed and Balding’s Figure 2B might be. The article is by Bob Jenkins and is titled: How many genetic ancestors do you have?

Genetic ancestors don’t help us that much, but Jenkins goes on to estimate the number of cousins by generation and by segment length. He gives one table for females and one for males. They are fairly similar, but the male table has a few inconsistencies that the female table doesn’t, so I’ll just use the female table. Bob Jenkins’ table looks like this:

Bob Jenkins table

And it goes all the way to 100 generations. Let’s interpret this. Pick 4th cousins.

That line says it’s generation 9, but Jenkins is counting every step up and down. Translating that to Speed and Balding’s value of G would make it G=5.

Then we see “6:5”. That means 5 cousins would each have 1 / (2**6) of the DNA of the ancestor. That fraction is 0.015625, which multiplied by 6800 total cM gives 106 cM, or multiplied by 5334 total Mb gives 83 Mb.

Then we see “7:45”. That means 45 cousins would each have 1 / (2**7) of the DNA of the ancestor, which works out to 53 cM or 42 Mb.

Etc.
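The arithmetic for decoding those “k:n” entries can be sketched as follows, using the genome totals used above (6800 cM and 5334 Mb); the function name is my own, just for illustration:

```python
# Decode a Jenkins "k:n" table entry as interpreted in this post:
# n cousins, each expected to share 1 / 2**k of the ancestor's DNA.

TOTAL_CM = 6800.0  # total genome length in centimorgans used above
TOTAL_MB = 5334.0  # total genome length in megabases used above

def decode_entry(entry: str):
    """Turn an entry like '6:5' into (cousins, expected cM, expected Mb)."""
    k, n = (int(x) for x in entry.split(":"))
    share = 1.0 / 2 ** k                  # fraction of the ancestor's DNA
    return n, share * TOTAL_CM, share * TOTAL_MB

n, cm, mb = decode_entry("6:5")
print(n, round(cm), round(mb))    # 5 cousins, about 106 cM, about 83 Mb

n, cm, mb = decode_entry("7:45")
print(n, round(cm), round(mb))    # 45 cousins, about 53 cM, about 42 Mb
```

Running every entry of a generation’s row through this gives the counts that go into the spreadsheet below.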

So we now put this all into a spreadsheet:

Number of detectable cousins (Jenkins)

and we divide by the column total to get the likelihoods:

Fraction of detectable cousins (Jenkins)

Plotting this in the Speed and Balding manner gives:

Fraction of detectable cousins (Jenkins)

Bob Jenkins does not use the same region lengths as Speed and Balding. Jenkins uses region lengths that double, so we have to be careful in our comparison. Let’s compare Jenkins’ 3 Mb with SB’s 2-4, Jenkins’ 5 Mb with SB’s 5-9, 10 with 10-19, 21 with 20-29 and 42 with 40-49.

Here is Jenkins above lined up for comparison:

Jenkins lined up

Below are my final calculations from my article of 10 days ago, calculated using Speed and Balding’s values for the probability of region length and the ISOGG table for the number of cousins, which I then extended until it reached the world population:

My calculations lined up

Note that Jenkins has no visible G>20 (very light blue) at 10 Mb and 21 Mb, which agrees with what I came out with.

Compare this with Speed and Balding for the equivalent segments. Look at how much G>20 (the grey shade) there is between 5 and 20 Mb. That is the part that I cannot believe is reasonable, and neither can Jenkins.

Speed and Balding lined up

For the smaller regions, under 10 Mb, Jenkins does include a significant number of G>20 segments (very light blue). But when you look at your match list, those are not included, because a match from 20 or more generations back will almost always match on just one segment. And if that segment is 9 cM or less, then it won’t be considered a match at Family Tree DNA and won’t show up in your match list. My results don’t show any G>20 at any segment length, but both Speed and Balding and Jenkins show many G>20 for segments smaller than 10 Mb.

The bottom line is that I don’t believe that Speed and Balding’s Figure 2B is appropriate to apply to the segment lengths of the matches in your match list. There is something undetermined that they don’t take into account.

Conclusion: Almost all your matching segments with any of your matches at any segment length will be within 20 generations. Small segments under your DNA company’s minimum match limit (e.g. 9 cM at FTDNA) will also be within 20 generations because people with segments that small from more than 20 generations back will not be in your match list.

The ISOGG Facebook group is a closed group, but if you have been given access to it, the comments there about this article are a worthwhile read.