Make Stuff
Pure CSS Images
Twitter Competition Bot
I spent a short amount of time writing a fairly crude Twitter bot in Python, using Tweepy. It listens to a stream of tweets, looking for ones with keywords like 'win', 'RT', and 'fav'. If a tweet passes a couple of filters that try to determine whether it's a genuine competition, the bot follows the author, favourites, and retweets.
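The core loop looks roughly like this. It's a sketch in the classic Tweepy StreamListener style (the streaming API has changed in newer Tweepy versions), and the credentials and the looks_like_competition filter are illustrative placeholders rather than my actual code:

```python
import tweepy

# Placeholder credentials; the real keys obviously aren't shown.
CONSUMER_KEY, CONSUMER_SECRET = "...", "..."
ACCESS_TOKEN, ACCESS_SECRET = "...", "..."

def looks_like_competition(tweet):
    # Stand-in for the real filters (the actual bot checks more than this).
    text = tweet.text.lower()
    return "win" in text and ("rt" in text or "fav" in text)

class CompetitionListener(tweepy.StreamListener):
    def __init__(self, api):
        super().__init__()
        self.api = api

    def on_status(self, tweet):
        if looks_like_competition(tweet):
            self.api.create_favorite(tweet.id)                  # favourite
            self.api.retweet(tweet.id)                          # retweet
            self.api.create_friendship(user_id=tweet.user.id)   # follow

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET)
api = tweepy.API(auth)

stream = tweepy.Stream(auth=api.auth, listener=CompetitionListener(api))
stream.filter(track=["win", "RT", "fav"])
```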
It's been running for just over a week now and I've won a few things, most of them digital so far, and all of them pretty much worthless. Included in my winnings are: a random Fortnite account, a cupcake from a stand in Manchester, Grave Danger for PS4, a Fear the Wolves beta key on Steam, the book 'Fliers' by Laura Mae, and an entire £0.05 sent to me on PayPal.
The sender said it was meant to be £0.10 but the PayPal fees were too large.
First Internship Experiences
I'm two weeks into my first internship and it has been great. The first thing that struck me was how little time you have during the week when you're working 9 to 5. During term time at uni there are generally a couple of hours of lectures and maybe a couple more of supervisions every day, so it's quite a shock finding that I've only got a few precious hours of free time every evening, especially since an early bedtime is needed for the early start. I'm enjoying cycling to work: it's two 12 km trips every day, and I'm lucky to have a shower at work so that I can cycle in kit; temperatures are reaching around 30°C at the moment.
There's so much that I've needed to learn: React, Tornado, Sass, Redis, asyncio, JavaScript, TypeScript, Linux, JIRA, and a few other things like using Git from the command line. Stack Overflow has been essential, as have the two very capable people sitting across the desk from me, who are always happy to help me out.
The start-up is based in a beautiful listed old mill building by the River Cam. You can go for a swim if you're feeling lucky, though the Cam can give you Weil's disease. There's a slackline between two trees which I tried for the first time today. Nothing has made me realise how terrible my balance is more than this; it's the walking forwards that's tricky. I feel like this scenic, farmland-surrounded riverside building is atypical of tech internship locations, and that's definitely a good thing. From the outside it looks like a rural farm building, then you step inside and it's a hive of technology; the juxtaposition of old and new makes for an interesting place to work.
My New MicroKorg
I picked up a MicroKorg for a very reasonable price and haven't been able to stop playing with it since. As a classically trained pianist I always used to (internally) turn my nose up at these sorts of machines: how could you ever make something interesting with so few keys? The MicroKorg has 32 small keys, and it was initially quite difficult to get used to their size and hit notes accurately, but after spending time with it I'm loving it.
I play in a funk band and use a Wii Rock Band controller as a wireless keytar (don't laugh); it's actually a great MIDI controller, and the wireless feature is a plus because I can wander around the stage. Anyway, the first time I used it in a performance I did my solo on the 32-key keytar instead of my usual 88-key keyboard, and the restriction, instead of limiting my options, opened up a world of opportunity. Normally with these solos I fumble about the blues scale, using vague, fast runs to compensate for a lack of imagination, but when confined to a small set of keys I came up with something new and interesting.
I think something similar happens when I play the MicroKorg: its limitation is not actually a limitation, but an opportunity. Plus it has a huge range of sounds, being a digital synthesizer. You can combine various oscillators, filters, envelopes, LFOs, and much more to create a massive variety of noises, from classic synth sounds to Hammond Organ-esque tones. The vocoder has huge potential, although I haven't used it much yet as I don't have the microphone.
This inspires me to try and make an analogue synth over the summer.
Experimenting with Edge Detection
I'm experimenting with simple edge detection in Java. The result above comes from an incredibly simple algorithm: for each pixel, find the neighbouring pixel with the highest contrast relative to it and subtract that neighbour's value from the original pixel's. Pixels surrounded by similarly valued pixels get a value of 0, whereas pixels with high-contrast neighbours get a higher value.
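For reference, here's a rough sketch of the same idea in Python with NumPy (my actual implementation is in Java); the filenames and the wrap-around edge handling are illustrative:

```python
# Max-contrast-neighbour edge detection, as described above.
# np.roll wraps at the image borders, a simplification I'm happy with here.
import numpy as np
from PIL import Image

def edge_detect(path):
    img = np.asarray(Image.open(path).convert("L"), dtype=np.int16)
    best = np.zeros_like(img)  # highest neighbour contrast seen per pixel
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # skip the pixel itself
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            best = np.maximum(best, np.abs(img - neighbour))
    return Image.fromarray(best.astype(np.uint8))

edge_detect("input.jpg").save("edges.png")  # hypothetical filenames
```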
Where next? There's a lot that can be done to improve this simple algorithm, starting with pre-processing like increasing the contrast on the original image. I'd eventually like to turn this into a 'CamScanner'-style application, which transforms a photo of a document into an upright, rectangular view. To do this I'd need to find the four edges of the largest rectangle in the image (very likely to be the document), then a perspective transformation from its corners does the rest of the work.
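The rectification step itself is straightforward once the corners are known. None of this is built yet, but a sketch with OpenCV might look like this (the corner-finding, which is the hard part, is assumed):

```python
import cv2
import numpy as np

def rectify(image, corners):
    # corners: the document's top-left, top-right, bottom-right and
    # bottom-left points, in that order.
    width, height = 1240, 1754  # roughly A4 proportions at 150 dpi
    src = np.float32(corners)
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (width, height))
```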
Adventures with Markov Chains
A Markov chain is an incredibly simple model describing a sequence of events in which the probability of each event depends only on the state attained in the previous event. And yes, I did lift that from Wikipedia, but it explains this much better than I can. Basically the idea is that you have a number of states, like whether or not it's raining, and the probability of a certain state tomorrow depends only on the state today. In other words, the probability of it raining tomorrow depends only on whether it's raining today.
I've used this model to generate text in particular styles. You feed the model a huge amount of text, and it builds up a probability table of what the next word may be given the previous word. You then use a random number generator to generate chains of text.
I experimented a bit with order and token choice. I found that training the model on individual letters produced pronounceable garbage: any words more than a couple of letters long were not real words, so the output was incomprehensible but pronounceable. If I wanted to make an English-sounding fake language, this is probably how I'd do it. I wanted generated chains of text that made sense, however, so I trained it on individual words instead. I found that a first-order model (looking back only one word) didn't produce very good sentences, and a third-order model (looking back three words) tended to reproduce the input data because I didn't have enough of it. The second-order model seemed like a good middle ground.
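For the curious, here's a minimal sketch of the second-order word model; the names are illustrative rather than my actual code, and `corpus` is assumed to be one big training string:

```python
import random
from collections import defaultdict

def build_table(corpus):
    words = corpus.split()
    table = defaultdict(list)  # (word1, word2) -> words seen next
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        table[(w1, w2)].append(w3)
    return table

def generate(table, length=30):
    state = random.choice(list(table))  # random starting word pair
    out = list(state)
    for _ in range(length):
        choices = table.get(state)
        if not choices:  # dead end: this pair only appeared at the end
            break
        # duplicates in the list mean we sample proportionally to frequency
        nxt = random.choice(choices)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)
```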
I trained the model on a variety of corpora. Generated wine-tasting notes were surprisingly believable, as were generated Trump tweets and dreams. Here are some generated wine-tasting notes:
"This wine combines weight and a low altitude vineyard. It's packed with both power and makes it refreshing too. Drink now."
"A kitchen-sink blend of 90% Sangiovese and the wine is still closed but delivers a creamy, textural wine. Those body powder and lime aromas are less attractive."
"The voluble, opinionated Jean-Luc Colombo has crafted this classic Gewurztraminer. It's plush on the finish."
"Clove aromas are outweighed by a touch of volatility along with dried apricot while the palate shows ripe aromas of sweet oak tones that will age well. Drink from 2016."
My Attempt at a Chess Bot
I made a chess bot in Java. Really I made a variety: one moving pieces around randomly, one greedily choosing the best next move, and finally one using the minimax algorithm with alpha-beta pruning. Above is a short and sweet game between the random bot and the simple greedy one. The biggest challenge here was not the bot itself but building a chess engine that generates possible moves efficiently, because even with alpha-beta pruning the number of positions to consider grows extremely fast as you look more turns ahead. Consider generating moves for White's bishop. Not only do you need to generate the possible board positions, but for each one you need to check that your own King is not left in check, otherwise the move is illegal. To do this you need to scan through the enemy pieces and see whether, with the bishop in its new position, any of them could take the white King. It's easy to see how fast this work grows as you go down the minimax tree.
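My bot is in Java, but the minimax-with-pruning core is the same idea in any language. A Python sketch, against a hypothetical Board interface (legal_moves, apply returning a new board, evaluate giving a static score):

```python
def alphabeta(board, depth, alpha, beta, maximising):
    if depth == 0:
        return board.evaluate()  # static material/position score
    if maximising:
        best = float("-inf")
        for move in board.legal_moves():
            best = max(best, alphabeta(board.apply(move), depth - 1,
                                       alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent would never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in board.legal_moves():
            best = min(best, alphabeta(board.apply(move), depth - 1,
                                       alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:  # we would never choose this line
                break
        return best
```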
I had grand plans to pit the bot against itself over and over again, until I realised that this deterministic machine would result in identical games on subsequent tries.
The result is a chess bot that beats me at chess, although I'm not the best benchmarking tool.
Easy Parallax Stars
I spent a long time messing around with chunks, loading, unloading, pooling, etc. before a friend suggested a much simpler way of getting parallax stars working. Generate a load of random points in a square, each with a random z-value. Scale each point's speed according to its z-value to simulate the parallax (this way you can have as many layers of parallax as you want, and to add more you can just increase the range of the z-value). Whenever a star leaves the square, just spawn it in on the opposite side. As long as your square is slightly bigger than the screen, that's it: amazing-looking, amazingly simple parallax stars. If you scale the star speed in the opposite way, i.e. closer stars go faster, then you get a kind of underwater-y effect. Could be a nice look for a diving game.
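Here's a framework-agnostic Python sketch of the approach; the screen size, margin, and star structure are just illustrative:

```python
import random

W, H, MARGIN = 800, 600, 50  # screen size plus a border around it

def make_stars(n):
    return [{"x": random.uniform(-MARGIN, W + MARGIN),
             "y": random.uniform(-MARGIN, H + MARGIN),
             "z": random.uniform(0.1, 1.0)}  # smaller z = further away
            for _ in range(n)]

def update(stars, cam_dx, cam_dy):
    for s in stars:
        # distant stars (small z) move less -> parallax
        s["x"] -= cam_dx * s["z"]
        s["y"] -= cam_dy * s["z"]
        # wrap: respawn on the opposite side of the square
        if s["x"] < -MARGIN:    s["x"] += W + 2 * MARGIN
        if s["x"] > W + MARGIN: s["x"] -= W + 2 * MARGIN
        if s["y"] < -MARGIN:    s["y"] += H + 2 * MARGIN
        if s["y"] > H + MARGIN: s["y"] -= H + 2 * MARGIN
```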
Make any Image Glow
I wanted to make a glowing 'Jazz Dash' logo for an ad video. The text was transformed quite a bit and I couldn't easily find a built-in feature for this in GIMP, my image software (it's free). Here's a great way to make any image you like glow, with full control over glow size and colour. It works especially well for images with a lot of transparency, like my Jazz Dash text.
Create a duplicate layer of your image in the editor of your choice. Change the colour of the background layer if you like, using colour adjustment filters or just the fill bucket. Then apply a Gaussian blur to the background layer. Done!
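If you'd rather not click through menus, the same trick can be scripted with Pillow; the filenames and blur radius here are placeholders:

```python
from PIL import Image, ImageFilter

logo = Image.open("logo.png").convert("RGBA")  # hypothetical filename

# The "background layer": a blurred copy of the image. Recolour this copy
# first if you want a coloured glow; a bigger radius means a bigger glow.
glow = logo.filter(ImageFilter.GaussianBlur(radius=12))

# Composite the sharp original over its blurred copy and save.
Image.alpha_composite(glow, logo).save("logo_glow.png")
```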