This is a fascinating story. I’m not familiar with the game show in question but someone hacked it and won a lot of money because he noticed that the game’s supposedly random element was actually very predictable.
This is a familiar, systemic security problem, similar to one I'm dealing with at the moment. The people who designed the game assumed, for no good reason, that its random element was sufficiently random. That is, they assumed nobody would notice it was in fact a simple repeating sequence. Perhaps they expected people to play the game within the rules – including the implicit rules they hadn't thought about – rather than looking at the game from the outside, with the goal of making money rather than playing the game.
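To see how little "randomness" a repeating sequence actually provides, here's a minimal sketch. Everything in it is hypothetical – the pattern, the period, the `detect_period` helper are all illustrative inventions, not the actual game show's sequence – but it shows how an observer who simply records outputs can recover the cycle and predict every future outcome:

```python
import itertools

def detect_period(observations):
    """Find the shortest period p such that the observed
    sequence repeats every p steps."""
    n = len(observations)
    for p in range(1, n):
        if all(observations[i] == observations[i % p] for i in range(n)):
            return p
    return n

# A hypothetical "random" prize board that actually cycles
# through a fixed pattern (purely illustrative values).
pattern = [3, 1, 4, 1, 5, 9, 2, 6]
board = itertools.cycle(pattern)

# Watch the game long enough for the pattern to repeat a few times...
seen = [next(board) for _ in range(24)]
period = detect_period(seen)

# ...then predict every future outcome with certainty.
predicted_next = seen[len(seen) % period]
actual_next = next(board)
assert predicted_next == actual_next
```

The point is that the attacker never needs to touch the system: pure observation from outside the rules is enough, which is exactly the blind spot the designers had.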
Many security problems have this signature; it’s natural for us to build systems around what people are supposed to do. That’s why it’s so entertaining when people break those systems by doing something unexpected. I’m the sort of person who instinctively seeks out the places where things break. You’d think that would be an advantage in the security community but it usually isn’t: typically nobody wants to know.
It's also the problem I'm dealing with right now. My client wants me to produce a proof-of-concept system that doesn't actually prove any concepts and actively ignores the fundamental security problem it creates. I've explained how the security problem can be solved, but the client isn't interested, insisting that it's only a proof of concept and we can solve the security problems later. I can't do that; whatever I have that passes for ethics knows that proofs of concept which ignore inconvenient security problems end up shipping as real software nine times out of ten. I won't do it.
Hack the world, everyone. Find the things people haven't thought of and play with them. Don't steal and don't fuck with people's privacy, but do fuck with things that fuck with people's privacy. Think laterally; don't brute-force it unless you have no other option.