Full-Stack Development (over Christmas and during lockdown).
The Idea:
To mesh Art with Technology.
We all post pictures to social media accounts. Have you ever thought perhaps one of your pictures might make for an interesting painting?
That’s the idea behind https://www.artmart.ie: an application that paints your digital photographs.
It pulls together a lot of the different tech ideas, tools and languages that I have used, some more than others. The languages used are HTML, CSS, JavaScript (both front-end and Node.js), Python and Bash. I gravitated to using vi exclusively as my editor of choice (it was developed in the 1970s…), though admittedly vi's options for finding valid CSS property values are limited...
I “discovered” one wonderful new piece of tech called Node-RED, and I love it. It came out of IBM Emerging Technology in the U.K., and it rocks.
This is the landing page for www.artmart.ie; it contains an image that illustrates the process.
The Technical Stack
AWS hosts the HTTPS web app (cloud side):
- Apache web server, on Linux.
- Node.js.
- HTML.
- CSS.
- Front-end JavaScript.
- Bash scripts.
- SEO.
Registration and maintenance of the domain name: I used a .ie domain, which was cheap from hostingIreland.ie.
The basic workflow: the user picks a picture they would like painted and uploads it, along with some basic contact details, through the web site. That information stays on the web server until it is downloaded to the edge, whereupon a simple bash script deletes the processed information from the AWS server.
At the Edge:
- A bash script to pull down the data from AWS (cron job).
- Python scripts to convert the image files from pixels to GRBL commands.
- A Python script to feed commands to the robot to paint the picture.
- A Node-RED workflow to automate processing of image files and generate notifications.
At the edge, a cron job is scheduled to use rsync to collect all of the new data and copy it to the Raspberry Pi.
The Node-RED workflow contains a watch node. Whenever new files arrive, it picks them up and starts the process of converting the image into a series of GRBL commands that can be sent to the robot to paint. Node-RED also fires off an email notification to alert me that a new picture has arrived.
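The pixel-to-GRBL conversion can be sketched roughly like this. This is a hypothetical minimal version, not the actual scripts: the scale, darkness threshold, feed rate and pen-up/pen-down Z convention are all illustrative assumptions, and the real pipeline works one colour at a time.

```python
# Hypothetical sketch: turn a grid of grayscale pixels into GRBL
# commands. Dark pixel runs become pen-down G1 strokes; the pen is
# lifted between runs. All constants here are assumptions.

SCALE = 0.5      # mm per pixel (assumed)
THRESHOLD = 128  # pixels darker than this get painted (assumed)

def pixels_to_grbl(pixels):
    """Return a list of GRBL command strings for one pass over the image."""
    commands = ["G21", "G90"]          # millimetres, absolute positioning
    for y, row in enumerate(pixels):
        x = 0
        while x < len(row):
            if row[x] < THRESHOLD:     # start of a dark run
                start = x
                while x < len(row) and row[x] < THRESHOLD:
                    x += 1
                # rapid move to the run start with the pen up...
                commands.append(f"G0 X{start * SCALE:.2f} Y{y * SCALE:.2f}")
                commands.append("G0 Z-1")   # pen down (assumed convention)
                # ...then one straight painting stroke to the run end
                commands.append(f"G1 X{(x - 1) * SCALE:.2f} Y{y * SCALE:.2f} F1000")
                commands.append("G0 Z1")    # pen up
            else:
                x += 1
    return commands
```

Scanning row by row like this is what produces the end-to-end linear style mentioned later.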
Each colour/color is painted in sequence, so swapping out the different colours is a manual process, and by extension, submitting the Python script that paints a specific colour is also done manually at the command line.
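Feeding a colour's commands to the robot looks roughly like this. It is a sketch under assumptions: the port name is an example, and it relies on GRBL's standard behaviour of acknowledging each line with "ok" before the next is sent.

```python
def stream_gcode(port, commands):
    """Send GRBL commands one line at a time, waiting for GRBL's 'ok'
    acknowledgement before sending the next. `port` is any file-like
    object with write()/readline(); on the Pi this would typically be
    something like serial.Serial('/dev/ttyUSB0', 115200) from pyserial
    (port name is an example). Returns the number of commands sent."""
    sent = 0
    for cmd in commands:
        port.write((cmd + "\n").encode())
        # GRBL replies "ok" (or "error:n") after processing each line
        reply = port.readline().decode().strip()
        if not reply.startswith("ok"):
            raise RuntimeError(f"GRBL rejected {cmd!r}: {reply}")
        sent += 1
    return sent
```

Taking the port as a plain file-like object keeps the send-and-wait loop testable without a robot attached.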
Hardware:
- Raspberry pi 4.
- Eleksmaker draw robot. (It arrived as a box of parts; I took my time assembling them, following instructions I found on the web!)
- Acrylic paints (various used; Posca acrylic markers by Mitsubishi have been the most reliable to date).
- Canvas 10in X 10in or A4 paper.
Results:
These are some examples of the results. In some instances the paintings can seem pixelated; I'm not so keen on this, and there are a number of possible remedies.
(Side-bar): That's me kayaking to Skellig Michael mid-summer 2018.
Technical challenges
In many respects the software components were the easy part; I could probably go over each of those in later blog postings. The front-end is quite rudimentary; it's really just a proof of concept. I used Promises in JavaScript quite a lot, and they made reasoning about asynchronous tasks easier. The project is too small to warrant a JS framework like Angular.
I spent a good deal of time trying to change the style of painting: rather than end-to-end linear, I wrote some code to paint by area, in blobs of continuous colour. I thought this would be some sort of DFS algorithm, and that's how I approached it, recursively computing a path; but in fact the problem is not so much to find a path as to construct a maze from a collection of adjacent points. I'm still not quite sure where the failure was in the code, or whether it is a problem with the calibration of the stepper motors in the robot; I can't discount that either. I intend to revisit this in the future.
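For reference, the DFS part of the area-painting idea, finding the blobs of continuous colour, can be sketched as a flood fill. This is a hypothetical illustration of the approach, not the code that misbehaved; it only groups the cells, and ordering each blob into a paintable path is the harder, maze-like problem described above.

```python
def colour_blobs(grid):
    """Group adjacent same-valued cells into blobs using an iterative
    DFS flood fill. `grid` is a list of rows; returns a list of blobs,
    each a list of (x, y) coordinates."""
    h, w = len(grid), len(grid[0])
    seen = set()
    blobs = []
    for y in range(h):
        for x in range(w):
            if (x, y) in seen:
                continue
            colour = grid[y][x]
            stack, blob = [(x, y)], []
            seen.add((x, y))
            while stack:  # DFS over 4-connected neighbours
                cx, cy = stack.pop()
                blob.append((cx, cy))
                for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                               (cx, cy + 1), (cx, cy - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and (nx, ny) not in seen
                            and grid[ny][nx] == colour):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            blobs.append(blob)
    return blobs
```

An iterative stack avoids Python's recursion limit, which a naive recursive flood fill hits quickly on photo-sized grids.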
I got caught out with rsync, or rather the watch node in Node-RED did not register when the files arrived, so I had to include a work-around to "touch" the files after they were synced to the file system. During the testing phase I used "touch" to mimic file creation, but when I did end-to-end testing, which used rsync, the workflow never ran.
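The work-around amounts to refreshing each synced file's timestamps so the watch node sees a change event. A minimal sketch (the function name and directory layout are just examples; on the Pi this runs after the rsync step):

```python
import os
import time
from pathlib import Path

def retouch(directory):
    """Update atime/mtime on every file in `directory`, the same effect
    as the shell `touch`, so a watcher that missed the rsync transfer
    still sees a fresh change. Returns the touched file names."""
    now = time.time()
    touched = []
    for p in Path(directory).iterdir():
        if p.is_file():
            os.utime(p, (now, now))  # rewrite both timestamps
            touched.append(p.name)
    return sorted(touched)
```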
The biggest challenge is probably with the materials and hardware. Initially I wanted to use actual paint-brushes, so I set up a paint-well; the idea was that after a set amount of time the robot would top up the brush with paint from the well. I soon discovered this was impractical: it was too slow. The servo motor that actuates the brush costs all of 2 dollars (it is the ubiquitous MG90S); I knew when I was assembling it that it wouldn't last, and sure enough it came to pass. I've ordered a number of replacements.
SEO. This, I must admit, is something of a mystery. The website does not appear in any search results. I did include the metadata tags in the web site and followed the recipe for registering it; I even got an invitation to spend money with Google to advertise it (I passed on the offer).
Conclusion:
It was fun to build this and do proper full-stack and robotics work. It's a great way to learn and sharpen skills. The limitations are more in the area of available paints, canvas sizes and dodgy servo motors.
Future Work:
The next item on the agenda is to replace the broken servo motor; I ordered several replacement parts today. I also want to utilise Node-RED more: there is scope to email more information, such as the volume of each colour per image. I will continue with this project.
If anyone else is using Node-RED, I'd love to hear what your use-cases are.