TITLE: Found Object
DESCRIPTION: Working with a sheet of film negatives as our data set, our story is a live performance of two detectives’ search for the story behind these captured images.
1. We were tasked with creating a non-fiction narrative about 24 film negatives that were found on the street.
2. After getting them all printed, we took some time to see if we could piece together the story from the negatives.
3. From looking at the photos, we saw that they were all taken on one day at a gathering, possibly a wedding. With this in mind, we decided to tell one woman’s story, since it seemed like the camera could have been hers. We call her Andrea:
4. We decided to do a live performance/skit in which two detectives are assigned to the missing person’s case of Andrea Leandro. She was last seen at this wedding, and in the skit we go through the images and slides, and through her life, to try to determine what happened to her. We developed her relationships with the individuals in the photos and added some more background information to portray a complete picture of Andrea.
Here is our analysis of the photos and the stories we developed around them.
Here is the detectives’ script, which portrays Andrea’s life as they try to figure out what happened to her.
For an interactive component, we handed out this sheet so the viewers could be active participants in the search for our missing person.
FINAL PROJECT: Mechanical Reproductions
Week 4: Putting the pieces together for the final presentation
1. Painting with matte & metallic acrylic paints
2. Laser cutting last little details
3. Improvising after the laser cutter I had booked for 2 hours broke down and the one working machine was booked for everyone else’s finals and thesis projects. I went to Blick and bought a bunch of wooden dowels and a cutter. Luckily, I had some wood scraps from my midterm project and some test-printed gears, so I was able to scrape together the final details.
4. The moment that it came together, along with an illustration for the inspiration and model.
Figuring out how to make the gears actually work together to create more movement and integrating them with the pipe system.
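Just as a sanity check on that movement, here’s a tiny Python calculation of how one gear drives the next; the tooth counts and crank speed below are made-up placeholders, not the actual laser-cut gears.

# Gear ratio: the driven gear turns (driver teeth / driven teeth) times the driver's speed
driver_teeth, driven_teeth = 12, 36   # placeholder tooth counts
input_rpm = 60                        # placeholder crank speed

output_rpm = input_rpm * driver_teeth / driven_teeth
print(output_rpm)   # 20.0 -> the larger gear turns slower, trading speed for torque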
TITLE: Case by Case
DESCRIPTION: Case by Case, my Nature of Code final project, came from the idea of helping little kids distinguish lowercase letters from uppercase letters. This was inspired by a parent-teacher meeting for my 3-year-old. The teacher told me that my child was really good at identifying uppercase letters, but not lowercase letters. When we read, we are deciphering many strings of lowercase letters, so I wanted to figure out a fun exercise to help him learn in preparation for learning how to read.
MAIN GOAL: To explore and understand the initial steps of letter and number recognition in a machine learning system, using Shiffman’s handwritten-numbers Neural Network example in p5 and applying letters to his sketch.
Continued from last week’s Final Project: Step 1….
1) To distinguish between upper and lowercase letters, I needed to create the handwritten letter dataset to add to Shiffman’s Neural Network.
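As a rough sketch of how the letter labels might plug into that network, here is one possible way to encode them as target vectors, following the one-output-node-per-class pattern from the MNIST digits example. The 52-class setup (26 uppercase + 26 lowercase) is my own assumption, not part of Shiffman’s code.

import numpy as np
import string

# Hypothetical label set: 26 uppercase + 26 lowercase letters = 52 output classes
labels = list(string.ascii_uppercase) + list(string.ascii_lowercase)

def target_vector(letter):
    # Rashid-style targets: 0.01 everywhere, 0.99 at the index of the correct class
    targets = np.zeros(len(labels)) + 0.01
    targets[labels.index(letter)] = 0.99
    return targets

# e.g. target_vector('b') returns a 52-value array with 0.99 at the lowercase-b position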
NEXT STEPS: To keep exploring this method and eventually build a kids’ app that could not only create test data from what the child writes, but could also be a fun way for kids to practice writing their letters and identifying them case by case.
PROCESSING PDF OUTPUT
DESCRIPTION: Data visualization of free wi-fi spots versus telephone booth locations in New York City, Brooklyn, Queens and the Bronx. Data sources came from NYC Open Data.
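For context, the heart of a map like this is just scaling longitude/latitude into canvas coordinates. Here is a minimal Python sketch of that mapping; the bounding box and canvas size are placeholder values, not the ones used in the actual Processing sketch.

def lnglat_to_xy(lng, lat, width=800, height=800,
                 lng_min=-74.26, lng_max=-73.70, lat_min=40.49, lat_max=40.92):
    # Rough NYC bounding box (placeholders); simple linear mapping like Processing's map()
    x = (lng - lng_min) / (lng_max - lng_min) * width
    y = (lat_max - lat) / (lat_max - lat_min) * height   # flip y so north is at the top
    return x, y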
CLOSEUP OF PROCESSING PDF OUTPUT
BACKGROUND MAPS: ORIGINAL & UPDATED COLORS
PROCESSING & ILLUSTRATOR COMPOSITE
1. Bought Materials: wood, tubing, metallic paints, gesso, 1/2″ diameter copper plumbing pipes, pipe cutter (the best cutting tool ever), marbles
2. Building 12″ x 24″ frame, laser-cutting pieces and figuring out the design to make a marble move through the pipes and system.
1. Finalizing Design.
2. Adding an interactive component to it, like a motion or touch sensor to trigger sounds.
3. Painting and piecing it together.
THE PIECE…so far with 1 week left to finish
TITLE: Pep Talk
DESCRIPTION: Non-linear storytelling using video and Eko Studio.
TITLE: Case by Case
DESCRIPTION: Nature of Code final project that distinguishes lowercase letters from uppercase letters for little kids to use. The idea was inspired by a children’s app called Endless Wordplay and by a parent-teacher meeting for my 3-year-old. His teacher informed me that he was really good at identifying uppercase letters, but not lowercase letters. In preparation for the next steps of reading, deciphering lowercase letters is something we needed to work on, since most reading is strings of lowercase letters. For myself, in an effort to grasp the ideas behind neural networks and machine learning, I decided to work with Shiffman’s neural network of handwritten numbers, which uses the MNIST database, and apply letters to his sketch. Eventually, I would like to apply that model to identify letters and numbers in graphic illustrations, photos and different typefaces.
1) Building upon Daniel Shiffman’s Neural Network example from Nature of Code, which was in turn based on Tariq Rashid’s Make Your Own Neural Network, I want to use a training set of handwritten letters to distinguish between uppercase and lowercase letters as an initial step. I eventually want to take photos or illustrations of letters and numbers, similar to the pieces from 36 Days of Type below, and teach the neural network to identify the letter or number.
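For reference, here is a minimal Python sketch of the query step in a Rashid-style three-layer network. The layer sizes and random weights are placeholders for illustration, not Shiffman’s or Rashid’s actual code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Placeholder sizes: 784 inputs (28 x 28 pixels), 200 hidden nodes, 52 letter classes
wih = np.random.normal(0.0, 784 ** -0.5, (200, 784))   # input -> hidden weights
who = np.random.normal(0.0, 200 ** -0.5, (52, 200))    # hidden -> output weights

def query(inputs):
    hidden = sigmoid(wih @ np.asarray(inputs, dtype=float))  # activate hidden layer
    outputs = sigmoid(who @ hidden)                          # activate output layer
    return outputs                                           # index of the max value = predicted class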
2) Converting a test image of a number illustration into a bitmap to add to the training set. I converted the test image into a 28 x 28 pixel greyscale image in Photoshop and then used Python to extract the pixel values from the photo illustration.
from PIL import Image

# Open the 28 x 28 greyscale test image exported from Photoshop
im = Image.open('um_000000.png')

# Flatten the image into a list of 784 pixel values (0-255)
pixels = list(im.getdata())
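Following Rashid’s approach, those raw 0–255 values then get rescaled into the 0.01–0.99 range the network expects as inputs; a quick sketch of that step, continuing from the snippet above:

import numpy as np

# Rescale the raw 0-255 pixel values into the 0.01-0.99 input range
scaled = (np.asarray(pixels, dtype=float) / 255.0 * 0.99) + 0.01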
3) Creating the uppercase and lowercase training set, adding it to the data folder and building…
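A rough sketch of how that training set could be assembled, assuming a hypothetical folder of letter scans and a Rashid-style CSV where each row is a label followed by 784 pixel values:

from PIL import Image
import csv, glob, os

# Hypothetical folder layout: data/letters/<label>/*.png, where the folder name is the label
# (e.g. data/letters/A/, data/letters/a/). Each scan is assumed to be one handwritten letter.
with open('letters_train.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for path in glob.glob('data/letters/*/*.png'):
        label = os.path.basename(os.path.dirname(path))        # folder name acts as the label
        im = Image.open(path).convert('L').resize((28, 28))    # greyscale, 28 x 28 like MNIST
        writer.writerow([label] + list(im.getdata()))          # label first, then 784 pixel values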
PROJECT: Mechanical Reproductions
DESCRIPTION: Using Francis Picabia’s collage, Very Rare Picture on the Earth (1915) as a blueprint and building a 3D version using the laser cutters, the CNC router and 3D printers. Also planning on adding a moving mechanical feature to it using gears and marbles or ball bearings.
• • • • •
STEP #1: The Sketch
I sketched Picabia’s painting in Illustrator, keeping in mind that this 2D vector drawing would be laser-cut and that some parts might be 3D printed. I wanted to build a drawing of a structure from scratch that symbolizes a working system (e.g., filtration of data or the human body), but I found it helpful to replicate and work off Picabia’s design. I will eventually adjust things to make the gears and tubes more functional so a ball or marble can move inside.
• • • • •
STEP #2: Materials
- Base/Frame: Creating the base and frame by layering wood on top of one another and cutting it on the laser cutter or the CNC machine.
- Cylindrical Pieces: Using the CNC machine to make the cylindrical pieces.
- Gears and Flat Pieces: Laser cutting them from wood and adding metallic colors of silver and copper to make them more metal-like.
- Metallic Finishes: Going to look into gold leaf or patinas, but bought some metallic paints to also test on the wood.
FINAL DESIGN: 3-piece puzzle of a tree maze (Top Cover, Middle, Bottom)
– 2 player game to race against one another
– Pieces can interlock together and contain metal ball bearings for the game
After we figured out the design for the tree maze, our next steps were to see how the design would translate into a 3D print using the LulzBot TAZ and Ultimaker machines.
1.) INITIAL 3D TEST PRINT: In order to see how the specs would translate to the printed form, we printed the initial design file we had been playing with. We figured out how to take the Tinkercad file, save it as a (.stl) file, load it into Cura to create the (.gcode) file, and then transfer the (.gcode) file to an SD card to print on the LulzBot TAZ machine. The total print time for this piece was 2 hours. The main problem we ran into with this test run was that the depressed lines printed as raised lines, so the extruded parts of the opposite piece could not interlock to close.
2.) MAKING ADJUSTMENTS: We needed to adjust the files to make the depressed line strokes thicker and deeper so the walls of the maze could fit right in. Also, from feedback after showing our sketch in class, one suggestion was to make the maze more in tune with the concept of a ‘tree’, so Jenn made the lines inside the maze more branch- and leaf-like.
NEW TREE DESIGN
CURA VIEW – 3 pieces (bottom, middle, top)
3.) 3D PRINTING PROBLEMS: We made the pieces smaller hoping to lessen the print time, which for this job was estimated at 3 hours. The problem with this was that the pieces were lifting off the bed and moving around. We tried printing these files a couple of times, making the file bigger and waiting for the machines to cool down. During these trials, we noticed that the different color plastics also affected how hard or soft the 3D piece was. We had more success printing with the opaque white plastic, as opposed to the clear and black plastics, which were flimsier.
Prints lifting and moving from the bed
Many failed 3D printing jobs
4.) RAFTS AND BRIMS: While trying to solve the problem of the plastic lifting from the bed, we tried adding a raft and a brim in Cura to help keep the piece down as it printed. It still didn’t work well on the LulzBot TAZ, but when we used the Ultimaker machine, the print finished without lifting off the bed.
Printing on Ultimaker with raft around design
For this week’s assignment, I decided to think of a dataset that I would like to use with a supervised machine learning algorithm, and I want to concentrate on finding patterns and similarity within images. At first I looked at the Instagram API because I wanted to try to find ‘magazine covers’ from the feed without hashtags or geotags, but since Instagram users own their images, there seem to be a lot of restrictions. Then I remembered from another class that the Smithsonian API could also be used to explore images, and through that I found the Cooper Hewitt API, which formats its data as JSON. Below is an example of how they used colors to classify the pieces in the museum.
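As a first test of the API, something like the following Python sketch could pull a handful of objects by color. The method name, parameters, and response fields here are assumptions to verify against the Cooper Hewitt documentation, and the access token is a placeholder.

import requests

# Assumed endpoint, method name, and parameters -- double-check against the Cooper Hewitt API docs
API_URL = 'https://api.collection.cooperhewitt.org/rest/'
params = {
    'method': 'cooperhewitt.search.objects',   # assumed search method
    'access_token': 'YOUR_ACCESS_TOKEN',       # placeholder token
    'color': '#0000ff',                        # assumed color filter parameter
    'page': 1,
    'per_page': 10,
}

response = requests.get(API_URL, params=params).json()
for obj in response.get('objects', []):
    print(obj.get('title'))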