Project Description
Overview
TinyCanvas is a low-power handheld image editor that lets users upload a picture and edit it through an arcade-like interface.
TinyCanvas connects to the internet so the user can upload an image through a custom TinyCanvas website interface. The image automatically appears on the OLED display connected to the microcontroller. Once the image is displayed, the user can draw on top of it using the joystick and on-board button. When editing is complete, the edited image can be exported back to the TinyCanvas webpage, where the user may download the final result.
The core feature of the editing interface is a colored pen overlay with eight selectable color options. The device also supports shake-to-undo: when the device detects a shake, the last stroke drawn is erased. Finally, the system supports image export: when the user finishes editing, a button press sends the edited photo from the microcontroller back to the user's TinyCanvas webpage.
System Design
Functional Specification
The TinyCanvas device operates as a finite state machine with several high‑level operating states.
- Initialization: Hardware peripherals are initialized including ADC, SPI, I2C, and UART. The device then connects to Wi‑Fi and establishes an MQTT connection with AWS IoT.
- Idle / Waiting for Job: The device subscribes to its MQTT job topic and waits for an incoming editing request.
- Image Download: Upon receiving a job notification, the device downloads the prepared image from S3 using a presigned GET URL.
- Editing Mode: The user edits the image using the joystick and draw button while cursor movement, drawing operations, and undo events are processed in real time.
- Upload Edited Image: When the user presses the export button, the edited image buffer is uploaded to S3 using the presigned PUT URL.
- Return to Idle: After upload completion the device returns to the idle state and waits for the next job.
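The state flow above can be sketched as a simple transition table. This is an illustrative Python sketch only (the actual firmware runs as C on the CC3200, and the state and event names here are assumptions):

```python
from enum import Enum, auto

class State(Enum):
    INIT = auto()      # peripheral, Wi-Fi, and MQTT setup
    IDLE = auto()      # subscribed to the job topic, waiting
    DOWNLOAD = auto()  # fetching the prepared image from S3
    EDITING = auto()   # joystick/button editing loop
    UPLOAD = auto()    # pushing the edited buffer back to S3

def next_state(state, event):
    """Return the next TinyCanvas state for a given event (sketch)."""
    transitions = {
        (State.INIT, "connected"): State.IDLE,          # Wi-Fi + MQTT up
        (State.IDLE, "job_received"): State.DOWNLOAD,   # MQTT job notification
        (State.DOWNLOAD, "image_ready"): State.EDITING, # image on OLED
        (State.EDITING, "export_pressed"): State.UPLOAD,
        (State.UPLOAD, "upload_done"): State.IDLE,      # back to waiting
    }
    # Unknown events leave the machine in its current state.
    return transitions.get((state, event), state)
```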
Key Features
Editing Interface
Image Upload
Users upload a JPG image through the TinyCanvas website, which automatically sends the image to the microcontroller for editing.
Joystick Cursor Control
An external analog joystick controls cursor movement on the OLED display using ADC sampling.
Colored Drawing Pen
The user can draw directly on the image using eight selectable colors displayed on the OLED screen.
Shake to Undo
An onboard accelerometer detects a shake gesture and restores the previous image state to undo the last drawing action.
Image Export
When editing is finished, the microcontroller uploads the edited image back to AWS where it can be downloaded from the TinyCanvas webpage.
System Architecture
AWS + Device Pipeline
The TinyCanvas architecture consists of two primary components: an AWS cloud pipeline and the CC3200 microcontroller device. Two Amazon S3 buckets are used within the system. One bucket hosts the static TinyCanvas webpage, while another bucket manages the flow of image processing through several folders.
When a user uploads an image through the webpage, the upload occurs through an API Gateway, which triggers a Lambda function to generate a presigned upload URL. The image is then placed into the incoming/ folder of the processing bucket. An S3 event triggers a Lambda processing function which downsamples the image to 128×128 resolution and converts it to RGB565 format suitable for the OLED display.
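The core of that conversion is packing each 24-bit RGB pixel into a 16-bit RGB565 value (5 bits red, 6 bits green, 5 bits blue). A minimal sketch of the packing step in pure Python (the actual Lambda presumably uses an image library for the resize itself):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack an 8-bit-per-channel pixel into a 16-bit RGB565 value."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)

def convert_image(pixels):
    """Convert a list of (r, g, b) tuples (e.g. a 128x128 image
    flattened row-major) into a big-endian RGB565 byte buffer."""
    out = bytearray()
    for r, g, b in pixels:
        out += rgb888_to_rgb565(r, g, b).to_bytes(2, "big")
    return bytes(out)
```

The byte order of the output buffer is an assumption; it would need to match whatever the firmware's SPI driver expects.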
The processed image is written to the prepared/ folder and an MQTT job notification is sent through AWS IoT Core. The CC3200 device subscribes to this topic and receives the job information including the presigned URLs required to download and upload the image.
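A job notification like this might be parsed on the device side as follows. The payload schema shown here is hypothetical (the source does not specify field names), and the real parsing happens in C firmware:

```python
import json

# Hypothetical MQTT job notification payload.
raw = json.dumps({
    "job_id": "abc123",
    "get_url": "https://processing-bucket.s3.amazonaws.com/prepared/abc123.bin?X-Amz-Signature=...",
    "put_url": "https://processing-bucket.s3.amazonaws.com/edited/abc123.bin?X-Amz-Signature=...",
})

def parse_job(payload):
    """Extract the job ID and presigned download/upload URLs."""
    job = json.loads(payload)
    return job["job_id"], job["get_url"], job["put_url"]
```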
The CC3200 downloads the prepared image using HTTPS, performs local editing operations on the RGB565 buffer, and displays the image on the OLED display via SPI. User input is processed through the joystick using ADC, the accelerometer using I2C, and push buttons through GPIO.
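Mapping a raw joystick ADC sample to cursor motion can be sketched as below. The center value, deadzone, and step size are assumptions (a 12-bit ADC centered near 2048), and the firmware itself implements this in C:

```python
def joystick_delta(adc_value, center=2048, deadzone=200, step=1):
    """Convert one axis's ADC reading into a cursor delta.
    Readings inside the deadzone around center produce no motion."""
    if adc_value > center + deadzone:
        return step
    if adc_value < center - deadzone:
        return -step
    return 0

def move_cursor(x, y, adc_x, adc_y, size=128):
    """Update a cursor position, clamped to the 128x128 canvas."""
    x = min(max(x + joystick_delta(adc_x), 0), size - 1)
    y = min(max(y + joystick_delta(adc_y), 0), size - 1)
    return x, y
```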
Once editing is complete, the modified image is uploaded back to S3 using a presigned PUT URL. A final Lambda function converts the RGB565 image back into a standard PNG image and stores it in the results/ folder so the webpage can display and download the edited image.
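The reverse conversion, unpacking RGB565 back to 8-bit channels before encoding a PNG, can be sketched as follows. Replicating the high bits into the low bits (so full-scale values map back to 255) is a common choice, though the actual Lambda may differ:

```python
def rgb565_to_rgb888(v):
    """Unpack a 16-bit RGB565 value into an (r, g, b) tuple with
    8 bits per channel, using bit replication to fill the low bits."""
    r5 = (v >> 11) & 0x1F
    g6 = (v >> 5) & 0x3F
    b5 = v & 0x1F
    r = (r5 << 3) | (r5 >> 2)
    g = (g6 << 2) | (g6 >> 4)
    b = (b5 << 3) | (b5 >> 2)
    return r, g, b
```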
Implementation
Hardware and Software
Hardware
The hardware platform is based on the Texas Instruments CC3200 LaunchPad development board. The system uses a 1.5 inch 128×128 SSD1351 OLED display connected via SPI for image rendering. An analog joystick provides cursor movement and drawing control using ADC inputs. An onboard accelerometer connected through I2C detects shake gestures for the undo feature. Two push buttons connected through GPIO control drawing and exporting actions.
Software
The microcontroller software maintains a logical copy of the OLED display in a 2D array buffer representing the canvas. Cursor movement and drawing operations update this buffer while also updating the physical display using graphics functions from the Adafruit_GFX library. Each stroke is stored so the system can revert to the previous canvas state when a shake gesture is detected.
AWS Integration
AWS services including S3, Lambda, API Gateway, and IoT Core coordinate the image processing workflow. Lambda functions generate presigned URLs, process images into RGB565 format, and convert edited images back into PNG format. MQTT communication through AWS IoT Core notifies the device when a new editing job is available.
Challenges
Development Challenges
The largest challenge during development occurred when integrating the AWS microcontroller code with the canvas editing functionality. Both the backend cloud pipeline and the embedded editing interface were fully functional when developed independently on separate boards, but combining them introduced several integration issues:
- Button function conflicts
- Screen initialization issues
- Memory limitations
- TLS return communication errors
One suspected cause of these failures was the CC3200's limited SRAM: both independent projects were already near the SRAM capacity limit. To work around this, the linker configuration file cc3200v1p32.cmd was modified to expand the SRAM allocation to approximately 233 KB of usable memory.
Future Work
Possible Improvements
- Replace presigned URLs with job IDs and S3 keys to reduce MQTT packet size and prevent URL expiry during long edits.
- Add automatic Wi‑Fi and MQTT reconnection along with job retry and queue handling for robustness.
- Decouple editing time from cloud timeout constraints using a state‑based upload process.
- Improve device robustness through better power regulation, OLED brightness control, and low‑power Wi‑Fi operation.
Video Demo
Project Demonstration
Team
Project Credits
- Kemma Snyder — Electrical and Computer Engineering, University of California Davis
- John D. Wilson — Electrical and Computer Engineering, University of California Davis
Special thanks to the Electrical and Computer Engineering Department at the University of California, Davis, Professor Soheil Ghiasi, and Teaching Assistant Randall Fowler for their guidance and support throughout this project.