I'm so glad to see others working on this. I've attempted it too, but not with any measure of success.
Trying to trade space for time, I used a model that gives every cell the set of all 512 possible 3x3 neighborhoods, pruned to those that could have caused that cell's present state ("alibis"). It then visits each cell, comparing its alibis to those of neighboring cells and eliminating mutually impossible ones from either set. This step has to be repeated until no more alibis are shed in a pass.
When it finally stabilizes, the model is a solution kernel that you can then field further guesses against. If a cell's alibis all agree it was dead in the "before", there's no need to guess, but if they're not unanimous, what if we hazard a guess one way or the other? How does that ripple through the rest of the board? If any cell runs completely out of alibis under a given guess, that guess clearly wasn't part of a valid solution, and it's time to back out and try a different one. If there's no solution at all, the board is a Garden of Eden.
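For anyone curious, here's a rough Python sketch of that alibi-elimination pass as I understand it (the names are mine, a toroidal board is an assumption, and it's completely unoptimized -- a real version would precompute the pairwise compatibility tables):

    from itertools import product

    def life_step_center(p):
        # p is a 9-bit int encoding a 3x3 neighborhood; bit i*3+j is the
        # cell at row i, column j, so bit 4 is the center.
        alive = (p >> 4) & 1
        n = bin(p).count("1") - alive
        return 1 if (alive and n in (2, 3)) or (not alive and n == 3) else 0

    def compatible(p, q, dr, dc):
        # Do alibi p (at a cell) and alibi q (at the neighbor offset by
        # dr, dc) agree on every board cell their 3x3 windows share?
        for i, j in product(range(3), repeat=2):
            qi, qj = i - dr, j - dc
            if 0 <= qi < 3 and 0 <= qj < 3:
                if (p >> (i * 3 + j)) & 1 != (q >> (qi * 3 + qj)) & 1:
                    return False
        return True

    def propagate(board):
        # board: 2D list of 0/1 observed states (the "after" picture).
        h, w = len(board), len(board[0])
        alibis = [[{p for p in range(512) if life_step_center(p) == board[r][c]}
                   for c in range(w)] for r in range(h)]
        changed = True
        while changed:  # repeat until a full pass sheds no alibis
            changed = False
            for r, c in product(range(h), range(w)):
                for dr, dc in product((-1, 0, 1), repeat=2):
                    if dr == dc == 0:
                        continue
                    nbr = alibis[(r + dr) % h][(c + dc) % w]
                    keep = {p for p in alibis[r][c]
                            if any(compatible(p, q, dr, dc) for q in nbr)}
                    if keep != alibis[r][c]:
                        alibis[r][c] = keep
                        changed = True
        return alibis  # any empty set => Garden of Eden (no predecessor)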
Ultimately I wanted to generate not just one solution, but all the solutions for a given board. I got stumped because I wasn't convinced I wasn't still working in 2**(n*m) time or worse trying guesses against the kernel.
It's a really fascinating problem, so much so that I even made a pico8 game about it years ago! Even the 6x6 grids are really tough!
Inspired by this, I stayed up all night building this!
https://jsbin.com/josuxarufu/1/edit?js,console,output
It tries to "evolve" the target image (right) by making random changes to the origin (left) -- output after 1 step is in center.
It's eventually supposed to become a genetic algorithm, but for now it just goes through the grid pixel by pixel, and if a pixel doesn't match the target, it inverts one of that pixel's neighbors. If that's beneficial (checking loss in a 5x5 rect centered on the test pixel), we keep the change. I get the loss down to about 25%, then it plateaus.
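The linked code is JS, but for anyone who wants the gist without opening the jsbin, here's roughly what one pass does, sketched in Python/NumPy (my names, toroidal wrap and 0/1 int arrays assumed):

    import numpy as np

    def life_step(g):
        # One Game of Life step on a toroidal 0/1 grid.
        n = sum(np.roll(np.roll(g, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)) - g
        return ((n == 3) | ((g == 1) & (n == 2))).astype(g.dtype)

    def local_loss(origin, target, r, c, k=2):
        # Mismatches between step(origin) and target in the 5x5 window at (r, c).
        out = life_step(origin)
        rows = [(r + i) % origin.shape[0] for i in range(-k, k + 1)]
        cols = [(c + j) % origin.shape[1] for j in range(-k, k + 1)]
        return int((out[np.ix_(rows, cols)] != target[np.ix_(rows, cols)]).sum())

    def greedy_pass(origin, target, rng):
        h, w = origin.shape
        offsets = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)
                   if (i, j) != (0, 0)]
        stepped = life_step(origin)
        for r in range(h):
            for c in range(w):
                if stepped[r, c] == target[r, c]:
                    continue
                # Pixel is wrong after one step: try inverting a random neighbor.
                dr, dc = offsets[rng.integers(len(offsets))]
                nr, nc = (r + dr) % h, (c + dc) % w
                before = local_loss(origin, target, r, c)
                origin[nr, nc] ^= 1
                if local_loss(origin, target, r, c) >= before:
                    origin[nr, nc] ^= 1  # not beneficial: revert the flip
                else:
                    stepped = life_step(origin)  # keep it; refresh the stepped grid
        return origin

(Call greedy_pass in a loop with rng = np.random.default_rng() until the loss plateaus.)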
Feel free to fiddle, fork, etc! (I stayed up pretty late so the code is bad! Suggestions welcome!)
A few days ago I posted this: https://news.ycombinator.com/item?id=41743887
Atavising was new to me. From https://nbickford.wordpress.com/2012/04/15/reversing-the-gam... :
> First of all, while I said “Predecessorifier” in the talk, “Ataviser” seems to be the accepted word, coming from “Atavism”, which the online Merriam-Webster dictionary defines as “recurrence of or reversion to a past style, manner, outlook, approach, or activity”.
I saw an interesting video on the same topic last week, by Brian Haidet (AlphaPhoenix), on reversing the GoL - it turns out that NP-hard is hard! https://www.youtube.com/watch?v=g8pjrVbdafY
"its like showing a solved rubiks cube and asking what the scramble was"
^ this analogy may be the best I've seen in a long time.
There's a continuous GoL called Lenia:
https://www.youtube.com/watch?v=PlzV4aJ7iMI (in French)
I made a related crossword puzzle. You can find it here if you want to give it a try! https://jacobw.xyz/projects/crossword/
It's always fun when people use autodiff in packages like PyTorch for completely unrelated use cases :)
Really cool to see. Was curious about the author's other work and was very annoyed that the blog does not link to the author's other posts, either with a "next/previous" link, an archive, or a homepage where a list of posts can be viewed. The only way to see other posts is to subscribe to the RSS. I can't imagine there being a good reason for having such a bad UX.
The article asks the following:
> I can’t help but wonder if there’s a reaction-diffusion-model-esque effect at work here as well
There are continuous approximations of the Game of Life that show this, for example this implementation:
Feels to me like there is no need for backpropagation. I think you can just iteratively grab a random pixel and flip it if that would bring you closer to the target after one step.
It would probably work even better if you tweak the loss function with some kind of averaging/blurring filter.
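Something like this, maybe (a quick sketch, not the article's method; the blur size and iteration count are made up, and scipy's uniform_filter stands in for the averaging):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def life_step(g):  # one Game of Life step on a toroidal 0/1 grid
        n = sum(np.roll(np.roll(g, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)) - g
        return ((n == 3) | ((g == 1) & (n == 2))).astype(g.dtype)

    def blurred_loss(grid, target, size=5):
        # Blur both images before comparing, so a flip that moves mass in
        # roughly the right direction still counts as progress.
        diff = (uniform_filter(life_step(grid).astype(float), size)
                - uniform_filter(target.astype(float), size))
        return float((diff ** 2).sum())

    def hill_climb(grid, target, iters=100_000, seed=0):
        rng = np.random.default_rng(seed)
        loss = blurred_loss(grid, target)
        for _ in range(iters):
            r = rng.integers(grid.shape[0])
            c = rng.integers(grid.shape[1])
            grid[r, c] ^= 1
            new = blurred_loss(grid, target)
            if new < loss:
                loss = new       # flip brought us closer after one step: keep it
            else:
                grid[r, c] ^= 1  # flip hurt (or was neutral): undo
        return grid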
Related: https://a.tulv.in/finding-mona-lisa-in-the-game-of-life.html
Albeit using a different stochastic technique.
I feel like just doing simulated annealing on the starting grid would work better and be faster to implement?
(Not saying the goal was working well and being fast to implement.)
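For reference, the whole thing fits in a page (a sketch only; the cooling schedule and constants are arbitrary, and life_step is the usual toroidal GoL step):

    import math
    import numpy as np

    def life_step(g):  # one Game of Life step on a toroidal 0/1 grid
        n = sum(np.roll(np.roll(g, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)) - g
        return ((n == 3) | ((g == 1) & (n == 2))).astype(g.dtype)

    def anneal(grid, target, iters=200_000, t0=2.0, t1=0.01, seed=0):
        rng = np.random.default_rng(seed)
        loss = int((life_step(grid) != target).sum())
        for k in range(iters):
            t = t0 * (t1 / t0) ** (k / iters)  # geometric cooling
            r = rng.integers(grid.shape[0])
            c = rng.integers(grid.shape[1])
            grid[r, c] ^= 1
            new = int((life_step(grid) != target).sum())
            # Always accept improvements; accept regressions with
            # probability exp(-(new - loss) / t), which shrinks as t cools.
            if new <= loss or rng.random() < math.exp((loss - new) / t):
                loss = new
            else:
                grid[r, c] ^= 1  # reject: undo the flip
        return grid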
I found this post absolutely fascinating! It got me thinking: is there a simple Turing-complete abstraction that is differentiable, without approximation? I've heard about Neural Turing Machines (but only as a layman), and from a quick peek they seem about as hard to reason about as other NNs?
Interesting. I can't zoom on mobile which is frustrating.
The emerging pattern looks a little like Blue Noise!
Seeing a lot of parallels to the article by Stephen Wolfram posted a few days back on the fundamental nature of time.
Very cool!
The objective function here defines a Markov random field (MRF) with boolean random variables and certain local statistics of nearest neighbours, either uniform if the target is a white image, or varying with location to produce an image. MRFs define Gibbs probability distributions, which you can sample from (which will already produce a good image here) or perform gradient ascent on to reach a local maximum. The negative log-likelihood of the MRF distribution is equal to the loss function of the original optimisation problem, so the maximum likelihood estimate (MLE) of the MRF (there will often be multiple due to symmetry) is the optimal solution(s) to the original problem. (But in general the MLE can look completely different from a sample.)
The statistics are 9th-order (over 3x3 blocks of pixels) but of a simple form, hardly more expressive than the well-known 2nd-order nearest-neighbour statistics (in terms of the different textures they can reproduce). In the approximate case where you only care about the average value of each pixel, I think it would collapse to 2nd-order. Texture synthesis with MRFs with local statistics is Turing reaction-diffusion discretized in space. I did my PhD on this topic.
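To make the sampling claim concrete, here's a toy single-site Gibbs sampler for this model (my own sketch, not from the article; the energy is the per-pixel mismatch after one Life step, and beta and the sweep count are arbitrary). The Markov property shows up in local_energy: flipping a pixel can only change step outputs in the 3x3 block around it, so that window carries the whole energy difference:

    import math
    import numpy as np

    def life_step(g):  # one Game of Life step on a toroidal 0/1 grid
        n = sum(np.roll(np.roll(g, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1)) - g
        return ((n == 3) | ((g == 1) & (n == 2))).astype(g.dtype)

    def local_energy(grid, target, r, c):
        # Only step outputs in the 3x3 around (r, c) depend on grid[r, c].
        out = life_step(grid)  # unoptimized: recomputes the full step
        rows = [(r + i) % grid.shape[0] for i in (-1, 0, 1)]
        cols = [(c + j) % grid.shape[1] for j in (-1, 0, 1)]
        return int((out[np.ix_(rows, cols)] != target[np.ix_(rows, cols)]).sum())

    def gibbs_sample(target, beta=3.0, sweeps=100, seed=0):
        rng = np.random.default_rng(seed)
        h, w = target.shape
        grid = rng.integers(0, 2, (h, w))
        for _ in range(sweeps):
            for r in range(h):
                for c in range(w):
                    grid[r, c] = 0
                    e0 = local_energy(grid, target, r, c)
                    grid[r, c] = 1
                    e1 = local_energy(grid, target, r, c)
                    # P(x=1 | rest) under the Gibbs distribution exp(-beta * E)
                    p1 = 1.0 / (1.0 + math.exp(beta * (e1 - e0)))
                    grid[r, c] = 1 if rng.random() < p1 else 0
        return grid

Raising beta sharpens the distribution toward the MLE; in the beta -> infinity limit this degenerates into the greedy pixel-flipping others in the thread are describing.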
Probably the most influential early paper on this kind of simple texture model, where you will see similar patterns, is:
Cross & Jain, 1983, PAMI, Markov Random Field Texture Models