NORAA | Machinic Doodles
Artificial Intelligence
Jessica In
While simple pen strokes may not resemble reality as captured by more sophisticated visual representations, they do tell us something about how people represent and reconstruct the world around them. The ability to immediately recognise and depict objects, and even emotions, from a few marks, strokes and lines is something that humans learn as children. Machinic Doodles is a live, interactive drawing installation that facilitates collaboration between a human and a robot named NORAA, a machine that is learning how to draw. It explores how we communicate ideas through the strokes of a drawing, and how a machine might also be taught to draw through learning rather than through pre-programmed, explicit instruction.

The installation is essentially a game of human-robot Pictionary: you draw, the machine takes a guess, and then draws something back in response. The drawing game, based on a recurrent neural network and combined with real-time human drawing interaction, generates a sequence of human-machine ‘doodle’ drawings. Because there are more classification models than generative models (the machine’s ability to classify exceeds its ability to draw), the work inherently explores this gap in the machine’s knowledge, as well as the creative possibilities afforded by the machine’s misinterpretations. Drawings are not only used for guessing; they are also analysed for spatial and temporal characteristics to inform drawing generation.

An interesting aspect of the work is that NORAA is visually ‘blind’: there is no camera and no image analysis. Instead, the drawings are encoded by the machine through a movement- and sequence-based approach. Motor angles are translated into stroke data through the kinematics of the mechanism before being sent to the drawing classifier. When the classification has no directly matching generative model, an alternative generative model based on the ‘next best guess’ is chosen and drawn by the machine.
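The pipeline described above (motor angles resolved into stroke data through the arm’s kinematics, a stroke-based classifier, and a fall-back to the ‘next best guess’ when no matching generative model exists) can be summarised in a short sketch. The Python below is illustrative only, not the installation’s code: the two-link arm, the link lengths, and the classifier and generator interfaces are all assumptions made for the example.

```python
import math

def forward_kinematics(theta1, theta2, l1=0.30, l2=0.25):
    """Convert two motor angles (radians) of an assumed planar two-link arm
    into a pen position (x, y). Link lengths l1, l2 are illustrative."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def angles_to_strokes(angle_log):
    """Turn a time-ordered log of (theta1, theta2, pen_down) samples into
    stroke data as (dx, dy, pen_state) triples: a movement- and
    sequence-based encoding rather than an image."""
    strokes, prev = [], None
    for theta1, theta2, pen_down in angle_log:
        x, y = forward_kinematics(theta1, theta2)
        if prev is not None:
            strokes.append((x - prev[0], y - prev[1], 1 if pen_down else 0))
        prev = (x, y)
    return strokes

def respond_to_doodle(strokes, classifier, generators):
    """Classify the human's strokes, then pick a generative model:
    the top guess if one exists, otherwise the 'next best guess'.

    `classifier` is assumed to return class labels ranked by descending
    confidence; `generators` maps a subset of those labels to models
    with a .sample() method (fewer generators than classes).
    """
    for guess in classifier(strokes):
        if guess in generators:
            return generators[guess].sample()  # the machine's drawn reply
    return None  # nothing drawable: the gap in the machine's knowledge
```

Because the set of drawable categories is smaller than the set of recognisable ones, the loop in `respond_to_doodle` is where the work’s productive misinterpretations arise: the machine answers with the nearest category it knows how to draw rather than the one it actually guessed.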