Nature of Code / Final Project: Step 1

TITLE: Case by Case

DESCRIPTION: A Nature of Code final project that helps little kids distinguish lowercase letters from uppercase letters. The idea was inspired by a children's app called Endless Wordplay and by a parent-teacher meeting for my 3-year-old. His teacher told me that he was really good at identifying uppercase letters, but not lowercase letters. In preparation for the next steps of reading, deciphering lowercase letters is something we needed to work on, since most reading is strings of lowercase letters. For myself, in an effort to grasp neural networks and machine learning, I decided to work with Shiffman's neural network for handwritten numbers using the MNIST database and apply letters to his sketch. Eventually, I would like to apply that model to identify letters and numbers in graphic illustrations, photos, and different typefaces.

————

MOCKUP:


DOCUMENTATION:
1) Building upon Daniel Shiffman’s Neural Network example from Nature of Code, which was itself based on Tariq Rashid’s Make Your Own Neural Network, I want to use a training set of handwritten letters to distinguish between uppercase and lowercase letters as an initial step (a rough, illustrative sketch of this kind of network follows the image below). Eventually, I want to take photos or illustrations of letters and numbers, similar to the 36 Days of Type pieces below, and teach the neural network to identify the letter or number.

36 DAYS OF TYPE
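To ground step 1, here is a minimal sketch of the kind of three-layer network Rashid describes, adapted to this task. It is illustrative only: the hidden-layer size, learning rate, and two-class output (uppercase vs. lowercase) are my assumptions, not the final implementation.

import numpy as np

# Minimal three-layer network in the spirit of Tariq Rashid's
# Make Your Own Neural Network (illustrative sketch, not the final code).
# Assumes 784 inputs (28 x 28 pixels) and 2 outputs: uppercase vs. lowercase.
class SimpleNetwork:
    def __init__(self, n_input=784, n_hidden=100, n_output=2, lr=0.1):
        self.lr = lr
        # small random starting weights between the layers
        self.w_ih = np.random.normal(0.0, n_input ** -0.5, (n_hidden, n_input))
        self.w_ho = np.random.normal(0.0, n_hidden ** -0.5, (n_output, n_hidden))
        self.activate = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoid

    def train(self, inputs, targets):
        inputs = np.array(inputs, ndmin=2).T
        targets = np.array(targets, ndmin=2).T
        hidden = self.activate(self.w_ih @ inputs)
        outputs = self.activate(self.w_ho @ hidden)
        # backpropagate the error and nudge the weights
        output_errors = targets - outputs
        hidden_errors = self.w_ho.T @ output_errors
        self.w_ho += self.lr * (output_errors * outputs * (1 - outputs)) @ hidden.T
        self.w_ih += self.lr * (hidden_errors * hidden * (1 - hidden)) @ inputs.T

    def query(self, inputs):
        inputs = np.array(inputs, ndmin=2).T
        hidden = self.activate(self.w_ih @ inputs)
        return self.activate(self.w_ho @ hidden)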

 

2) Converting a test image of a number illustration into a bitmap to add to the training set. I converted the test image into a 28 x 28 pixel greyscale image in Photoshop and then used Python to extract the pixel values from the photo illustration.

from PIL import Image

# Open the 28 x 28 pixel greyscale image exported from Photoshop
im = Image.open('um_000000.png')
# Flatten the image into a single list of pixel values (0-255)
pixels = list(im.getdata())

(SOURCE)
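From here, each labeled example can be written out in the same label-first, comma-separated format the MNIST training files use. The helper below is a hypothetical sketch; the file name and label scheme are placeholders I chose, not part of the original workflow.

# Hypothetical helper: append one labeled example as an MNIST-style CSV row
# (the label first, then the 784 greyscale pixel values).
def append_training_row(label, pixels, csv_path='letter_training.csv'):
    with open(csv_path, 'a') as f:
        f.write(','.join([str(label)] + [str(p) for p in pixels]) + '\n')

# e.g. append_training_row(1, pixels)  # 1 = uppercase, 0 = lowercase (assumed)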

3) Creating the upper and lowercase training set, adding that to the data folder and build…

Piecing It Together / Week 10 & 11 / Proposal and Sketch for Final

PROJECT: Mechanical Reproductions

DESCRIPTION: Using Francis Picabia’s collage Very Rare Picture on the Earth (1915) as a blueprint, I’m building a 3D version using the laser cutters, the CNC router, and 3D printers. I’m also planning to add a moving mechanical feature using gears and marbles or ball bearings.

• • • • •
INSPIRATION: Francis Picabia’s machine paintings that were on display at MoMA’s Francis Picabia: Our Heads Are Round so Our Thoughts Can Change Direction exhibit, and a past ITP project, Visualizing Time: A Marble a Minute.
• • • • •
STEP #1: The Sketch
I sketched Picabia’s painting in Illustrator, keeping in mind that this 2D vector drawing would be laser-cut and that some parts might be 3D printed. I wanted to build a drawing of a structure from scratch that symbolizes a working system (e.g., the filtration of data or the human body), but I found it helpful to replicate and work off Picabia’s design. I will eventually adjust things to make the gears and tubes more functional so a ball or marble can move inside.
• • • • •
STEP #2: Materials
  • Base/Frame: Create the base and frame by layering pieces of wood on top of one another, cut on the laser cutter or the CNC machine.
  • Cylindrical Pieces: Use the CNC machine to make the cylindrical pieces.
  • Gears and Flat Pieces: Laser-cut them from wood and add metallic silver and copper colors to make them look more metal-like.
  • Metallic Finishes: I’m going to look into gold leaf or patinas, but I also bought some metallic paints to test on the wood.

 

Piecing It Together / Wk 9 / 3D Printing

FINAL DESIGN: 3-piece puzzle of a tree maze (Top Cover, Middle, Bottom)

FEATURES:
– 2-player game to race against one another
– Pieces interlock and contain metal ball bearings for the game
PROCESS:
After we figured out the design for the tree maze, our next step was to see how our design would translate into a 3D print using the LulzBot TAZ and Ultimaker machines.
1.) INITIAL 3D TEST PRINT: To see how the specs would translate to printed form, we printed the initial design file we had been playing with. We figured out how to take the Tinkercad file, save it as an .stl file, load it into Cura to create the .gcode file, and then transfer the .gcode file to an SD card to print on the LulzBot TAZ machine. The total print time for this piece was 2 hours. The main problem we ran into with this test run was that the depressions printed as raised lines, so the extruded parts of the opposite piece could not interlock to close.

TEST PRINT
2.) MAKING ADJUSTMENTS: We needed to adjust the files to make the depressed line strokes thicker and deeper so the walls of the maze could fit right in. Also, one suggestion from class feedback after showing our sketch was to make the maze more in tune with the concept of a ‘tree’, so Jenn made the lines inside the maze more branch- and leaf-like.

NEW TREE DESIGN

 

CURA VIEW – 3 pieces (bottom, middle, top)

 

3.) 3D PRINTING PROBLEMS: We made the pieces smaller hoping to lessen the print time, which for this job was estimated at 3 hours. The problem was that the pieces were lifting off the bed and moving around. We tried printing these files a couple of times, making the file bigger and waiting for the machines to cool down. During these trials, we noticed that the different colored plastics also affected how hard or soft the 3D piece was. We had more success printing with the opaque white plastic, as opposed to the clear and black plastics, which were flimsier.

Prints lifting and moving from the bed

Many failed 3D printing jobs


4.) RAFTS AND BRIMS: While trying to solve the problem of the plastic lifting from the bed, we tried adding a raft and a brim in Cura to help keep the piece down as it printed. It still didn’t work well on the LulzBot TAZ, but when we used the Ultimaker machine, the print was able to finish without lifting off the bed.

Printing on Ultimaker with raft around design

Nature of Code / Exercise 3: Datasets

DATASET
For this week’s assignment, I decided to think of a dataset that I would like to use with a supervised machine learning algorithm, and I want to concentrate on finding patterns and similarities within images. At first I looked at the Instagram API because I wanted to try to find ‘magazine covers’ in the feed without hashtags or geotags, but since Instagram users own their images, there seem to be a lot of restrictions. I remembered from another class that the Smithsonian API could also be used to explore images, and through that I found the Cooper Hewitt API, which formats its data as JSON. Below is an example of how they used colors to classify the pieces in the museum.
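As a rough sketch of how I might start pulling color-tagged objects from the Cooper Hewitt API (the method name, parameters, and response keys below are assumptions based on their documentation, and the access token is a placeholder):

import json
import urllib.request

ACCESS_TOKEN = 'YOUR_TOKEN_HERE'  # placeholder token
# Assumed endpoint and method name; check the Cooper Hewitt API docs
url = ('https://api.collection.cooperhewitt.org/rest/'
       '?method=cooperhewitt.search.objects'
       '&access_token=' + ACCESS_TOKEN +
       '&color=red&page=1&per_page=25')

with urllib.request.urlopen(url) as response:
    data = json.loads(response.read())

# Print the titles of the returned objects (key names are assumptions)
for obj in data.get('objects', []):
    print(obj.get('title'))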

 

Data Art / Project #3 / Place & Space

DATA ART #3: Place & Space
Paths in Space – Melissa’s Week on the Moon

………

GROUP
Melissa Parker
Anne-Michelle Gallero

………

PROCESSING SKETCHES
Melissa and I exchanged OpenPaths data and used it to create our own interpretation of each other’s week in outer space. Below is Melissa’s week on the moon visualized.

MoonPath from annemgal on Vimeo.

———

MoonPathAbstract from annemgal on Vimeo.
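One way to think about the mapping behind sketches like these is the snippet below (a Python sketch for illustration only; the actual visualizations were written in Processing, and the points here are placeholders standing in for a week of OpenPaths latitude/longitude data):

# Remap a value from one range to another (what Processing's map() does)
def remap(value, in_min, in_max, out_min, out_max):
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Placeholder (lat, lon) points standing in for OpenPaths data
points = [(40.693, -73.986), (40.729, -73.993), (40.741, -74.003)]
lats = [p[0] for p in points]
lons = [p[1] for p in points]

WIDTH, HEIGHT = 800, 600
# Longitude maps to x, latitude to y (flipped so north is up on screen)
path = [(remap(lon, min(lons), max(lons), 0, WIDTH),
         remap(lat, min(lats), max(lats), HEIGHT, 0))
        for lat, lon in points]
print(path)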

………

SOURCE CODE

………

PHOTO SOURCE
NASA

Piecing It Together / Wk 8 / 3D Puzzle

GROUP
Jennifer Tis
Anne-Michelle Gallero

………

IDEA
To create a 3D maze in the shape of a tree on either the 3D printer or on a wood panel using the CNC machine.

 

………

INSPIRATION

WOODEN MAZE ON WESTWORLD

 

MORE EXAMPLES OF WOODEN PUZZLES

………

PROCESS
1. After collecting some photos and links of 3D mazes and puzzles that we liked, and seeing examples of wooden mazes online, we decided to continue with the maze idea as either a 3D-printed object or one made on the CNC machine.

2. Created a maze from a maze generator online: mazegenerator

3. Started building the 3D version in Tinkercad and figured out that we could build the object in Illustrator and then import it into Tinkercad as an SVG file.


 
4. Initially, we were having problems importing the file because Tinkercad would disregard the complex shapes with grooves and simplify them into a filled-in piece. We needed to create the twists and turns of the shape as an object instead of a stroke, so we attempted to rebuild the maze from scratch in Illustrator to turn all the lines into a combined object (and not just lines).

 
5. We needed to find a simpler and faster way of taking the stroke and turning it into an object… and after doing a Google search, we discovered “EXPAND.” Applying the “EXPAND” feature in Illustrator turns the stroke into an object.
 
6. Importing the new SVG file into Tinkercad: Tinkercad still filled in the shape as a solid piece, so we opened the SVG file in Fusion 360, where it maintained its original shape.


 
7. Next step will be to prototype and test the design on a 3D Printer.

Nature of Code / Exercise #1 / Binary Tree

full screen

EXERCISE #1 / Visual Binary Trees: I wanted to concentrate on something visual and understand binary trees in code. The samples above (relying heavily on Dan Shiffman’s video tutorial and source code for binary tree visualization) are really basic experiments with colors and shapes in p5, inspired by Giorgia Lupi and Stefanie Posavec’s Dear Data graphs, to help me understand how the code works. I definitely need to work on this more to come up with something more original. Eventually, I’d like to model it after the neural illustrations below by the neuroscientist Santiago Ramón y Cajal.
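For reference, the structure underneath the sketch is just a binary search tree. Here is a minimal Python sketch of it (illustrative only; the actual visualization is in p5.js, following Shiffman’s tutorial):

# Minimal binary search tree: each node would map to a circle on the canvas,
# with its children drawn branching left and right beneath it.
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def insert(self, value):
        # smaller values branch left, larger (or equal) branch right
        if value < self.value:
            if self.left is None:
                self.left = Node(value)
            else:
                self.left.insert(value)
        else:
            if self.right is None:
                self.right = Node(value)
            else:
                self.right.insert(value)

# Build a small tree to draw
root = Node(50)
for v in [30, 70, 20, 40, 60, 80]:
    root.insert(v)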

 

Data Art / Project #2 / Text & Archives

PROJECT: Aesop’s Fables

CONCEPT: A Data Art project using archives from Project Gutenberg, taking the text of Aesop’s Fables to animate the stories and letterforms while highlighting the fables’ lessons, as an art installation for children.

VIDEO OF PROCESSING SKETCH:

DA_Fables_ProcessingVideo_3 from annemgal on Vimeo.

WORK IN PROGRESS:
The rough sketch below shows how I would like to play with the text and illustration to make the story come alive. I would like the text to be more graphic and dynamic. For instance, using the text as paint: breaking words apart from the sentences and animating the letters so they start falling like rain, or scrolling line by line within the space to simulate water or wind. It’s something I need to work on if I continue developing this idea. I also want to take the list of characters (animals, trees, gods & goddesses, etc.) and visualize their relationships and the number of times they appear in each story as another feature of this piece.
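As a rough illustration of the falling-letter idea (a Python sketch of the concept, not the project’s Processing code; the line of text and the speeds are placeholders):

import random

# Each letter of a line from a fable becomes a particle that falls like rain.
class Letter:
    def __init__(self, char, x, y):
        self.char = char
        self.x = x
        self.y = y
        self.speed = random.uniform(1.0, 4.0)  # each letter falls at its own rate

    def update(self):
        self.y += self.speed  # move the letter down one step per frame

line = "The Hare and the Tortoise"
letters = [Letter(c, x=i * 12, y=0) for i, c in enumerate(line) if c != ' ']

# One animation step; in the actual sketch this would run every draw() frame
for letter in letters:
    letter.update()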

CODE:

Avant-Garde Art / Wk 6 / Final

 

ARTWORK: Found Tweets Colorized

DESCRIPTION: 9 small rectangular canvases (6″ x 4″), painted with a range of skin tones, each displaying a single found tweet referring to essential human needs and wants.

PROCESS:
I started thinking more about immigration after doing my first assignment for the Data Art class. My project needed to be more abstract and less infographic-like, with lists and numbers. This led me to think about how to capture the essence of the issue rather than showing data in a graph or bar chart. When you look around, especially in NYC, all the people you encounter or ride the train with make you question, “Why does a person or family leave their original country?” It mostly comes down to the essential human needs of food, money, work, second chances, safety, religious freedom, and a better life. Since I had just finished a Twitter Bot class and learned to do searches with the Twitter API, I wanted to try something with that. For the last 10 days, I took samples of data for each of the words [money, work, second chances, safety, immigration and a better life] and collected the results. And since morning results might be very different from nighttime results, I tried to take samples of data from each hour of the day.
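A sketch of how that kind of sampling might look with the tweepy library (an assumption on my part; the credentials are placeholders, api.search is the older tweepy search call, and ‘-filter:retweets’ is standard Twitter search syntax for dropping retweets):

import tweepy

# Placeholder credentials from a Twitter developer account
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

search_terms = ['money', 'work', 'second chances', 'safety',
                'immigration', 'a better life']

for term in search_terms:
    # '-filter:retweets' keeps only original tweets in the results
    results = api.search(q=term + ' -filter:retweets', count=20, lang='en')
    for tweet in results:
        print(term, '|', tweet.text)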

Monitors, Projectors or Paintings? Figuring out how to display the piece
I liked how my prototype as GIFs could show more tweets, and my original idea was to display real-time tweets from the searches, but after reading through the data I realized there is a lot to filter out, like retweets, racist comments, and spam, and I’m still figuring out how to code that. In the process of reading all the data, trying to see patterns, and finding meaningful tweets, I felt like this piece was turning into a collage of found tweets assembled by chance occurrence. With Marina’s feedback and the references she suggested in mind, I decided to turn this into a series of paintings, which would make it more object-like and a more permanent way of capturing a digital moment that would otherwise be lost in the digital abyss. The only downside is having to pick one tweet to represent each word.

Tweets Colorized: Many Skin Tones
I had an idea at first to photograph people and color-sample their skin tones for a more literal representation, but luckily, among Marina’s references, she sent along the colourstudio site, and I used that as a guide for the background colors of the canvases.

FINAL VERSION AND REFLECTION:
This piece is a mash-up of many different artists’ works that I’ve encountered during this project and have been inspired by. This has been done before, and to me it reflects conceptual and procedural art in the sense that it’s not the actual product, but the idea, that’s impactful. Through this project, I gained a lot more insight into the many different meanings of these ‘words’ from many different kinds of people/tweeters. Twitter contains a lot of noise that you would rather not spend your time reading, but sometimes there are a few ideas or statements that resonate. And Twitter is a good place to get different perspectives after a specific event or moment.