Tuesday, November 28, 2017

The following is an article I had published in the Sept. 20 edition of U.S. 1 Magazine.
Say Again? Voco Says Yes
by Melissa Drift
The video has garnered more than 1 million views on YouTube since it was uploaded in November 2016: television personality and comedian Jordan Peele and Adobe’s community engagement manager Kim Chambers look on as Princeton graduate student Zeyu Jin presents a groundbreaking new software program he is developing.

The presentation took place last year at the annual Adobe Max, a weeklong convention where the software company showcases its new products and features. Adobe is known for its digital editing programs, several of which are industry standard in fields like broadcasting, movie production, and web design.

The new application is called VoCo, and it promises to revolutionize audio production the way Adobe’s iconic Photoshop program revolutionized photography. The user can type words that were not said in an original recording, and the program will generate a new recording of those words in the voice of the original speaker. In other words, VoCo can produce natural-sounding audio of people saying words and phrases that they never actually said.

Back to the 2016 video: The crowd laughs and cheers as Jin plays a recording of comedian Keegan-Michael Key from the Comedy Central show “Key & Peele.” Jin and the hosts cheerfully exchange jokes as he demonstrates how he can quickly change the recording to make Key say that he “kissed Jordan three times,” words the comedian never actually spoke. At the end of the demonstration, Peele jokingly says to Jin, “You could get in big trouble for something like this!”

The ability to change audio recordings this way naturally raises many questions. Jin and his faculty advisor, Princeton professor of computer science Adam Finkelstein, are aware of the issue and say they are working on several different ways to detect whether their program has been used on a recording. “When we show people this project, many people immediately have that reaction, like ‘oh no, you could make somebody say something that they never said,’ and so we have to have this conversation,” Finkelstein says. “It’s not too shocking or surprising.”

Finkelstein says it was the same way when photo editing programs like Adobe’s Photoshop came out. “People want to know, has this photo been edited, right? When Photoshop came out all of a sudden it was possible to do all kinds of crazy things.” He points to the public outcry over the February 1982 cover of National Geographic as an example. The magazine had used digital editing to move two of the Egyptian pyramids slightly closer together so they would fit on the cover. “People have been asking these questions since the beginning of digital editing,” he says.

Speech synthesis and the ability to edit recordings are not new. It has been technologically possible to change recordings and even manufacture new words since the beginning of analog tape recording. Skilled radio and music producers would physically cut and reattach pieces of audio tape to rearrange or delete unwanted words or sounds in the recording. It was a laborious process, but it worked well enough.

The advent of digital technology, starting in the 1980s, changed everything. Broadcasters and amateurs alike now routinely use editing programs such as Adobe Audition, an industry standard, to easily fix recording mistakes, remove unwanted background sounds, and otherwise modify recordings. This has been a creative boon for amateur artists: many things that were once possible only in a studio are now done quickly and cheaply by high school kids in their bedrooms.

Since the advent of YouTube a decade ago, homemade movies and music have become mainstream. A notable example is the Japanese music composition program Vocaloid, which shares some similarities with the VoCo project. The program lets users make a series of stock synthesized voices sing whatever words and melodies they want. Each synthesized voice in the program is represented as a different anime cartoon character, and the artificial voices are manufactured from sample recordings of real Japanese singers. In the past decade Vocaloid music has become its own distinct genre; composers often favor creepy, horror-related subject matter, and the voices tend to sound child-like and robotic.

Addressing the difference between Vocaloid and VoCo, Jin says it’s easier to edit a singing voice than it is to edit speech. “Vocaloid synthesizes a pretty robotic voice, if you listen to them. But people like it because it sounds a little bit less human and very interesting. But they can do a lot of stuff like singing all kinds of styles and syllables. For speech, it’s actually way harder to make it more human-like. When you are singing the pitch is pretty stable per note. And the stable pitch is very easy to synthesize with a computer. The computer can just make a flat pitch or a pitch with a little bit of fluctuation to make it a little bit more human like. But for speech, the pitch is everywhere. We’re still doing the research on how people choose pitch in saying things. This is still a pretty open question.”

Currently, if you want to fix a mistake or add new words to a recording, you have two options: invite the speaker or voiceover talent back to redo the recording, or fix the mistake manually, either of which can be time consuming and cost prohibitive. To manufacture a new word, you would have to hunt through the recording for all the bits and pieces of sound you need, then cut and paste them together, another slow and costly process.

In order for it to work, all the sounds you need have to be present in the original recording. Jin says it would take around 30 hours of recordings for companies like Google and Apple to make their synthesized voices. Says Jin: “The problem [with those programs] is they only support one specific voice. You can’t make a Siri of your own voice.”

VoCo makes this process much faster and cheaper, and it can recreate any voice from only a 20- to 40-minute audio sample. This is because VoCo works differently from other voice synthesis programs. The user first provides a written transcript of the recording. Unlike other programs, VoCo starts by generating a generic synthetic voice to match the written transcript. The program breaks down the original voice recording into small sound snippets called phonemes, the building blocks of words. The artificially generated voice is then used as a guide to find the correct phonemes and build them into words. You type the words you want to change, and the program makes the change automatically, much like the find-and-replace feature in Microsoft Word.
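The find-and-replace analogy can be sketched in a few lines of Python. Everything below is invented for illustration (Adobe has not published VoCo's implementation in this form): the phoneme bank, pronunciations, and placeholder "snippets" stand in for real waveform alignment and stitching.

```python
# Toy illustration of editing speech at the phoneme level, the way
# find-and-replace edits text at the word level. Phoneme snippets are
# represented as placeholder strings rather than actual audio.

# Pretend each phoneme maps to a snippet of the speaker's recorded audio.
phoneme_bank = {
    "K": "<snip_K>", "IH": "<snip_IH>", "S": "<snip_S>", "T": "<snip_T>",
    "R": "<snip_R>", "IY": "<snip_IY>",
}

# A tiny pronouncing dictionary (normally derived from the transcript).
pronunciations = {
    "kissed": ["K", "IH", "S", "T"],
    "three":  ["TH", "R", "IY"],
}

def synthesize_word(word):
    """Concatenate the speaker's own phoneme snippets to build a new word.
    Returns None if a needed phoneme was never spoken in the recording."""
    phones = pronunciations.get(word)
    if phones is None:
        return None
    snippets = []
    for p in phones:
        if p not in phoneme_bank:
            return None  # phoneme missing from the original recording
        snippets.append(phoneme_bank[p])
    return "".join(snippets)

print(synthesize_word("kissed"))  # every phoneme is available
print(synthesize_word("three"))   # fails: "TH" was never recorded
```

The failure case hints at why the older cut-and-paste approach needed every sound to already exist in the recording, and why VoCo's synthetic guide voice is such a step forward.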

Says Finkelstein: “It’s true that we’re making it easier to do this thing that could cause people to believe that somebody said something that they didn’t say. It is true that we’re moving toward making it easier to do that, and it’ll take time for people to adjust to that idea, just as it took time for people to adjust to the idea that you could move a pyramid in Photoshop” — something unremarkable today.

Originally from Anhui, China, Jin, 28, has been in the United States for the past six years. He spent the first two in Pittsburgh, earning a master’s degree in machine learning at Carnegie Mellon University. He is now a fourth-year graduate student working toward a Ph.D. in computer science at Princeton University. Several music keyboards once used by the Princeton Laptop Orchestra are scattered on the desk in his office. Music and composition, he says, are among the many eclectic interests that led him to computer science and the VoCo project.

Jin, whose mother is a government worker and whose father played a founding role at Chery Motors, a Chinese car company, started playing with computers at the age of three and the piano at four. He also plays the flute, and he learned computer programming in elementary school. Jin says he didn’t really like playing the piano at first. He likes music, but he found the traditional process of learning an instrument boring and repetitive, and he wanted to find a way to use computers to make it easier. That’s what led him to this work. He also cites his uncle, who holds a Ph.D. in computer science, as a childhood influence.

At the age of 17 Jin released several albums of Celtic music in China. His passion for the music started when he got into the Norwegian music group Secret Garden as a kid, after the group had won the Eurovision song contest. “It’s very funny I listened to Secret Garden. That’s pretty classic Norwegian music and I was like ‘holy crap this is great music.’ So I kept listening to more and more and then I was thinking OK I’m going to write some music and that’s when I got started. So I was highly influenced by European music when I was in China.”

His albums weren’t especially popular. “People like pop music, and I’m writing Celtic music. And most Chinese people don’t know what Celtic music is. It’s very Irish-Scottish. It’s so lively.” He was surprised to receive some recognition for his music after moving to the U.S. He submitted one of his Celtic pieces to a music competition in Pittsburgh when he was living there and won second place. “There people don’t listen to Celtic music much. So with just like 30 or 40 endorsements or something, I was ranked second in [the competition]. I was like how can that happen, and it’s under the Celtic genre, and no one is listening to Celtic music?”

During his time at Carnegie Mellon Jin helped create an app called Music Prodigy, which teaches people to play piano or guitar by automatically detecting the notes they play. His academic advisor there, Roger Dannenberg, was the chief scientific officer of the music education startup. “He was doing all the scientific stuff there, and he brought me in because I could help with a little bit of development. And that’s when we actually got connected. After one year we realized that my knowledge can actually improve what’s being done there, so I got more involved with the company, with the core technology.”

Four years ago Jin came to Princeton intending to do music-related research in the Princeton Sound Lab (now defunct, the lab conducted research in computer-generated music). He had originally wanted to make a program that could turn recordings of the human voice into musical instruments, especially the violin. In his office, Jin plays a recording of himself humming a simple melody that he transformed to sound like a violin. It sounds impressively realistic. He says that’s because the approach lets the emotion in the user’s voice come through in a way no existing music synthesizer can emulate.

Jin says he has put this music project on the back burner to focus on VoCo for now, but it works on a very similar principle. “The method is I recorded a real violin but the violin is basically [only] playing scales.” He sings a scale to demonstrate. “That’s the training data. That’s the material we need to make a more natural violin sound for converting your voice.”

After the Sound Lab disbanded Jin had to find a new faculty advisor. Adam Finkelstein, a Swarthmore alumnus, Class of 1987, and a University of Washington Ph.D. who has been at Princeton since 1997, decided to take Jin on board. Before VoCo, Finkelstein was best known for his work with another graduate student on a project called PatchMatch, which led to what eventually became the Content-Aware Fill feature in Photoshop.

While teaching at Princeton, Finkelstein has taken several sabbaticals at Adobe, Pixar, and Google.

The main focus of Finkelstein’s research had been visual applications like video, animation, and computer graphics. He didn’t intend to change that. “How I thought it would work was that I thought I would convince Zeyu to work in video. Because in a way, audio and video are certainly related,” Finkelstein says. “There were some things I wanted to do in video editing. In fact he did start working on those things. I thought I would get him interested in working on video, but in the end what happened was Zeyu got me interested in working on audio. He sort of pulled me over to that side.”

Jin had been working with Finkelstein on some of his video-related research when they realized the need for a program like VoCo. “I’ll tell you why I’m interested in it,” Finkelstein says. It’s “because many, many times, I have made a narration for a video. I record the audio by reading a script into a microphone and then later we go and edit my recording, and I sometimes wish that I had said something a little differently, and it’s really hard to go back and re-record. So right from the very beginning I wanted this project to work because I wanted to edit my own voice in my own narrations to adjust what I was saying during the editing process.”

With existing technology “you have two choices: Live with what you recorded or go back and re-record. But when you go back and re-record, it’s very difficult to get the voice quality to match well because the microphone settings are a little bit different and the layout of the room is a little different and it just sounds different. Not only is it a pain to set up the microphone and set up the room and re-record the one sentence you wanted to change, but then you put it in there and it sounds different. It’s so much easier if I could just type the word,” and, he adds, the sound matches better.

Researchers are developing other features to help mismatched audio sound better, but they are still not very user friendly. Being able to simply type what you want makes VoCo especially convenient. “Maybe, eventually, commercial software will be able to let you do that too, but even so, it’s still a pain to go back and re-record it. So I’d rather just be able to type what I want it to say.”

The researchers behind VoCo are not quite at the point where they can magically reanimate the voices of the dead — though that is one of their ultimate goals. It is research that Jin is doing for his Ph.D., which he expects to receive next year. Jin presented the project at the ACM SIGGRAPH conference in July, and his paper has appeared in the journal ACM Transactions on Graphics. Finkelstein says that at this point he is not even sure which Adobe product it will be used in, or whether it will be used at all.

“The truth is that we’re working on being able to synthesize longer sentences, but right now we can only typically synthesize one or two words, a short phrase before it starts to sound awkward or bad. And it’s not always even successful with a short word.” They need at least a 20-minute recording to make the audio samples of new words. In the paper, they tested study participants to see how often they could distinguish words and phrases edited by VoCo from those that were unedited. They couldn’t tell the difference more than 60 percent of the time, on average.

Their next goal is to generate full sentences. “It’s a work in progress, but I’d like us to eventually be able to have the ability that you could type a full sentence and have it sound reasonably natural in somebody’s voice,” Finkelstein says. “I think a careful listener even just with their ear can hear it. We may also want to work on the project that tries to automatically detect if this operation has been done.”

Jin says they are working on two potential approaches to detect whether VoCo has been used on an audio recording. One potential solution that they have not tested yet is to embed an audio fingerprint into the words that have been synthesized by VoCo. An inaudible frequency is embedded into the sound and can only be extracted by computer analysis. “Even if you make a filter to edit that synthesized word, it won’t remove the fingerprint,” says Jin. “So we can always recover the fingerprint and say this has been edited.” He thinks that’s the best approach. A person would have to be an expert at audio editing to figure out how to remove it.
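The fingerprinting idea Jin describes can be illustrated with a toy example: mix a faint tone at a known frequency into the synthesized audio, then look for a spike at that frequency in the spectrum. This sketch is not Adobe's scheme (which is unpublished); the frequency, amplitude, and detection threshold here are invented, and real watermarking is far more robust to editing.

```python
import numpy as np

SAMPLE_RATE = 44100
MARK_FREQ = 19000      # near-inaudible for most adults (invented value)
MARK_AMPLITUDE = 1e-3  # far below the level of the speech itself

def embed_fingerprint(audio):
    """Add a faint marker tone to a mono audio signal."""
    t = np.arange(len(audio)) / SAMPLE_RATE
    return audio + MARK_AMPLITUDE * np.sin(2 * np.pi * MARK_FREQ * t)

def has_fingerprint(audio):
    """Check for a spectral spike at the marker frequency."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / SAMPLE_RATE)
    bin_idx = np.argmin(np.abs(freqs - MARK_FREQ))
    # Threshold calibrated for this toy: the tone's FFT magnitude is
    # roughly MARK_AMPLITUDE * N / 2, far above the empty-bin level.
    return bool(spectrum[bin_idx] > 1.0)

# One second of fake "speech": a plain 220 Hz tone.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
marked = embed_fingerprint(speech)

print(has_fingerprint(speech), has_fingerprint(marked))
```

As Jin notes, the appeal of this approach is that the marker survives ordinary filtering unless the editor knows exactly what to look for; keeping the frequency and encoding secret acts like a password.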

He says they could make the exact frequency and how it was programmed a secret, like a password, so people wouldn’t be able to figure it out. They haven’t actually begun to work on that part of it, though. “There is a shortcoming to that approach,” Jin says. “Because this paper is out, some people can implement VoCo and make their own version so they can just get out of any censorship. So to address that problem, our actual approach is to use deep learning to learn the pattern of the VoCo algorithm. When a VoCo-like algorithm is editing the waveform there is a trace of how it is edited inside the waveform. It’s very subtle.

“But we believe that we can actually use deep learning to find out what trace this editing process gives you. And actually in our lab experiments we’ve found that it’s pretty effective for finding VoCo-like editing. Even human editing can be detected. Like you actually re-record that voice and you manually insert that word back, we can also detect that edit as well.”

Deep learning is machine learning based on neural networks. Here’s how it works: “We create all kinds of examples of VoCo-edited words and non-edited words and we give the computer a lot of samples and teach it how to recognize whether an audio clip is edited or not — basically training it through examples,” Jin says.
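That train-on-examples workflow can be shown with a deliberately simplified stand-in. The lab's actual detector is a deep network operating on waveforms; here a one-feature logistic regression on a made-up "editing trace" statistic plays its role, purely to show how labeled edited/unedited examples teach a classifier.

```python
import numpy as np

# Toy detection-by-learning sketch: generate labeled examples of "edited"
# and "unedited" clips, then fit a classifier to tell them apart. The
# single feature per clip is an invented editing-artifact score.
rng = np.random.default_rng(0)

edited   = rng.normal(loc=1.0,  scale=0.3, size=200)   # label 1
unedited = rng.normal(loc=-1.0, scale=0.3, size=200)   # label 0
X = np.concatenate([edited, unedited])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Logistic regression trained by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # predicted probabilities
    w -= 0.5 * np.mean((p - y) * X)          # gradient step on weight
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(w * X + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real system would extract thousands of features per clip (or learn them directly from audio), but the principle is the same: the trace an editing algorithm leaves behind, however subtle, becomes a pattern the model can learn.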

The ability to turn your voice into a violin is one potential side use for the VoCo technology. Finkelstein and Jin are also excited about the possibility of using VoCo to return the original voices to people who can no longer speak. People in this situation have to use a generic artificial voice that takes a very long time to make — physicist Stephen Hawking, for example. When film critic Roger Ebert lost his voice due to cancer, a team of Scottish audio engineers worked to create a synthesized voice that actually sounded like him. They were only able to do so, however, because they had hours upon hours of recordings from his years on television. VoCo could allow people with disorders such as ALS or cancer to use their own voice much more quickly and at a lower cost.

Jin stresses that the VoCo project is still far from completion. “We’ve had a lot of very public attention recently, but I feel like there’s a lot of things to be done before we can say this is something you can use in your project. In time, we will finally make it happen, but that YouTube video last year, that was the so-called Adobe sneak, which means it’s technology that’s brewing in the lab. But it’s something still in the lab, so it’s not like a product or anything like that.” He said he wouldn’t expect it to be a product for “quite a few years.”

“We can’t comment on whether or not VoCo will be used in any Adobe product. We can only say that we hope it will,” Finkelstein says.

Sunday, October 23, 2016

The following is a piece I recently had published in a local central NJ magazine called Princeton Echo. This version includes formatting and text boxes with quotes from the print edition which were not included in the original online version. The piece is posted on the Echo's website at:
http://mercerspace.com/princeton/going-behind-the-scenes-of-princeton-neuroscientists-cutting-edge-research/

Going behind the scenes of Princeton neuroscientists’ cutting-edge research

Melissa Drift reports from inside the
Princeton Neuroscience Institute

Researcher Taylor Webb sets up a TMS experiment with Branden Bio, a first year grad student in psychology and neuroscience.

By Melissa Drift
“Have you done this before?” the woman asks. “I have,” I answer, as I sign the last consent form. Have I ever been injured by a metallic object or shrapnel? No. Have I ever done metal work like welding? No. Do I have a pacemaker? It goes on. The lengthy form lists every imaginable reason one could have metal in the body. I check “no” for each question. I remove my shoes and she checks me with a portable metal detector like the ones used in airports. Once she’s confident that I’m 100 percent nonmetallic, we enter the scanner room.
I lie on the table and am drawn head first into the white, cylindrical machine. My legs stick out at the end. My head is secured inside a plastic cage-like contraption. It’s an especially tight squeeze because of the plastic prescription goggles I have to wear to see. I can’t wear my regular glasses because of the metal in them. In my right hand, I grasp a joystick, my fingers carefully placed on the buttons. I wonder, purely out of curiosity, if I could climb out of this thing by myself if I had to. In my left hand, I grasp a rubber squeeze bulb like the one on a blood pressure cuff. I can squeeze it at any time to stop the scan if I should feel the need. I decide to proceed and the test goes on.
A special mirror inside projects a computer screen from the next room. The image is of stars in space, which feels oddly appropriate. I feel like I’m inside a planetarium, and corny Star Trek clichés come to mind. Depending on the day, this machine may become your personal planetarium, movie theater, or video arcade.
This will be my seventh brain scan in two years. But no worries; there’s nothing wrong with me. This is no ordinary scan and I am not at a medical facility. I’m in a lab on the Princeton University campus. Believe it or not, this is my job for today. I am getting an fMRI, short for functional magnetic resonance imaging. It’s a special type of scan used mainly for research. A regular MRI (without the f), used for medical purposes, takes still pictures of the brain, revealing nothing of the way it works. A functional MRI, on the other hand, shows which areas of your brain you are using to perform mental tasks. It does so by tracking the flow of blood over time in different areas, and it is a mainstay of modern neuroscience research.
I imagine the powerful magnetic field penetrating my body, as I hear the pulsing buzzes and I swear I can almost feel it. The scan takes an hour. In an adjacent room, the inside of my head is on display for all to see. I think to myself, “I’m getting $20 for this?”
With a series of loud buzzing sounds, the machine switches from planetarium to arcade. Today’s task: a shape-matching game. Point the joystick at each shape as soon as it appears on screen and press the correct button for each one. Try to score as many points as possible.
Many participants in these studies are college students who do it for course credit. Not me. I am not affiliated with Princeton University in any way, shape, or form. These paid experiments are open to the general public. (See sidebar, page 13, for details on how to participate.)
Although I have no formal education in science or psychology, I have always been fascinated by science, especially the workings of the human brain. Since high school, whenever I went to the library, I often went for the neuroscience and psychology books first. I also enjoy science documentaries and television shows. So I thought it would be cool to participate in neuroscience experiments. But I always assumed I would have to look to another university, perhaps in New York City, if I wanted to do something like that.
Had it not been for a very unusual post on Craigslist in 2012, I never would have known about the opportunities at Princeton. Checking for freelance gigs in the TV/film/video section, I saw this: “Actress needed to Perform Monologue $1500,” with Princeton listed for the location. I’m no actress, but for that amount of money, it piqued my interest. I clicked the ad and to my surprise, it was for a very involved fMRI study at Princeton University. It was posted by a graduate student from the lab of Professor Uri Hasson, who is known for his research on the phenomenon known as neural coupling. Researchers have found that when people listen to a story their brain activity will match up with that of the person who was telling the story. This was a breakthrough in the understanding of human communication and had received a lot of media attention.
The graduate student was looking to test this theory further. She needed someone to perform a monologue while being scanned so that she could compare that person’s brain activity to her own, when she had performed the same monologue. She needed the person to be scanned multiple times. Unfortunately, I didn’t get to do that particular study because it was canceled, but the ad gave me the idea to Google “paid research at Princeton University.” Princeton does not have a medical program. So I was surprised to find that I could participate in multiple fMRI studies 10 minutes from my house and make money doing it.
Princeton researchers do not normally post on Craigslist. I find out about experiments by checking my account on the Paid Psychology Experiment website otherwise known as Sona, for the name of the company that hosts it. It’s a version of the same website used by many other Ivy League schools for this purpose. Studies are posted there, with lists of available appointment times. Most experiments are designed to take one hour to complete. They are run at various times throughout the week and on weekends, so it is possible to accommodate many work schedules. I usually try to sign up for two studies back to back so I can make at least $24 per trip. Though the labs are all part of Princeton University, they are independent entities and most of their projects are separate.
Most studies don’t pay anything near the $1,500 promised in that Craigslist ad. The pay is $12 per hour for computer tasks and $20 per hour for the more involved experiments like fMRI and EEG. For many studies, you may even receive a few dollars more as a bonus incentive, based on your performance. Depending on the season, I typically make an extra $24 to $40 per month on average. On a few occasions, I’ve made up to $130 in one day for more involved experiments.
The studies take place at a state-of-the-art new facility tucked away behind the pedestrian bridge that spans Washington Road. Two stories, plus two basement levels on one side, make up the Princeton Neuroscience Institute. The attached five-story wing, Peretsman-Scully Hall, is home to the university’s psychology department. Renowned Spanish architect Rafael Moneo’s open, eco-friendly design incorporates numerous glass walls to let in as much natural light as possible. Cheerful doodles and equations reminiscent of “Good Will Hunting” decorate the large windows between offices. This growing academic facility is attracting top scholars from all over the world. I have visited the building many times since it opened in 2014.

As of this writing I have completed seven brain scans, several EEGs, and countless computer-based tasks. Some of the more notable visits include a sleep test where I had to take a 90-minute nap while hooked up to an EEG machine. I have also participated in several experiments where I watched movies. For those, you typically do something like describing what the movie was about or pressing buttons when you see something on screen that you’re supposed to pay attention to.
Some of the more fun ones were the two video arcade-like machines in the lab of professor Jordan Taylor, who investigates the computational processes involved in human motor control. One has a special lever on the bottom that senses your movements; it feels like the controls on a piece of construction equipment. You use it to move objects on a screen projected onto a horizontal surface in front of you. The other machine is similar but smaller, controlled with a special pen that has a sensor in it. On each machine, you use the lever or the pen to throw dots at targets on the screen, a lot like the vintage video game Pong. The purpose is to test theories of hand-eye coordination.
Those are the more unusual ones. Most paid experiments are a lot less involved. They are simple computer tasks that are sometimes similar to what you see on the Science Channel show, “Brain Games.” Some labs that run them have even been featured on that show before. They typically test things like visual perception, memory, and decision making.
Ones that I have done include repeatedly judging the number of dots on the screen, memorizing pictures, taking surveys, and rating characteristics of faces as they flash on screen. Many of these applications are intended to become fMRI experiments but need to be tested outside the scanner first.
When I was in college 10 years ago, I had a professor who got his Ph.D. in psychology at Princeton in 1954. Scott Parry was still active in the Princeton University community as a carillon player and organist in the University Chapel. He wrote several books on effective communication and workplace training techniques. After selling his consulting firm, Training House Inc., he spent his retirement teaching courses in public speaking and communications at Mercer County Community College, where I was a student.
One of his favorite stories was about the isolation experiments researchers used to do when he was a graduate student. Participants were put in a room with absolutely nothing to do for extended periods of time. To fight the boredom, they would talk to themselves. They would tell jokes. After a while, they would start to tell the same joke over and over. Ironically, Parry would tell that same story over and over.
Psychology researchers have been known to do bold things. The most famous examples are the Milgram experiment at Yale in 1961 and the Stanford prison experiment in 1971.
The Milgram experiment tested how far people would go when following orders from a perceived authority figure. Participants administered a memory test to an actor who they believed was another participant. They were instructed to give him an electric shock for each wrong answer, increasing the strength of the shocks as the test went on. The buttons they pressed didn’t really do anything. The man purposely gave wrong answers, and as the strength of the shocks supposedly increased, he would scream and pretend to have chest pains. Experimenters encouraged participants to keep going even when they expressed concern for the man’s wellbeing. Participants were not always told afterward that the shocks were faked, or that the other man was an actor. Many were deeply disturbed by what they had done.
In the Stanford prison experiment college students were divided into groups of “prisoners” and “guards.” To make the experience as authentic as possible, the “prisoners” were even arrested and processed by actual police officers. They were held by the “guards” in a makeshift “jail” on the Stanford University campus. The experiment was intended to last for two weeks, but was called off after six days because participants were taking their roles surprisingly far. The group acting as prisoners rioted and showed signs of extreme emotional distress. The interactions of the “guards” with the “prisoners” became increasingly abusive over the course of the experiment.
Controversial projects like these pushed the boundaries of ethics and precipitated major reforms. Researchers probably couldn’t get away with anything like that today. Researchers can still be vague, however, about what an experiment actually is before you do it. A typical study title may be something like “Reacting to Visual Stimulus.” Then I get there and find out the “stimulus” is actually something like playing cards or looking at pictures of houses. They usually don’t want you to know the true purpose of an experiment ahead of time because that knowledge would bias your performance. So I often go into a test appointment not knowing exactly what to expect.
But there are still lots of rules. This is apparent in the documents I must sign to collect my payments. I am required to sign a consent form telling me what exactly I will be doing during the experiment. Experimenters are required to warn me about any disturbing or upsetting images that may be presented. For fMRI I also sign an additional safety consent form and fill out a basic medical history screening form. All of this information is kept confidential. After the experiment, the researchers are required to debrief me on what it was truly about. I receive a document for that and sign yet another to receive the money.
The address and phone number for Princeton University’s institutional review board (IRB) are everywhere. An IRB is a committee formally designated to approve, monitor, and review biomedical and behavioral research involving humans. The purpose of the IRB is to assure that appropriate steps are taken to protect the rights and welfare of humans participating as subjects in a research study. The IRB must approve all experiments before they are conducted.
Another very important IRB rule is that I can stop the experiment at any time and still collect the money. My first EEG experiment stands out in my mind for this reason.
“Are you sure?” the man says. I forgot that the EEG test required the use of silicone gel in my hair. EEG — electroencephalogram — monitors brainwaves via electrodes placed on your head. “You don’t have to do anything you don’t want to. You can leave right now and I’ll still give you the money,” he says. I think of the Milgram experiment and appreciate his concern. “No, I already showed up, we might as well,” I say. I’d rather not be seen in public with goopy hair, but it’s not that big a deal.
The gel feels cold and slimy as he squirts it onto my head through several small holes in the cap I’m wearing, similar to a bleaching cap. He affixes several electrodes to my face. We enter a room with a thick, scary-looking door that looks like it belongs on a bank vault — or a secret dungeon. Its purpose is to block electrical interference that could drown out the very faint signals from the brain. Radio frequency shielding, they call it. I sit atop a doctor’s office exam table and do the task. I have to memorize words from a list and press a button when they appear on the computer. The researchers administer the test from the next room. They give me instructions through an intercom. I imagine I’m a contestant on some sort of bizarre game show.
This particular test is being run by a post-doctoral associate (post-doc), a recent Ph.D. graduate who intends to eventually teach or run his or her own lab. It’s less common to run across post-docs; most people I meet doing this are graduate students or research assistants.
I finish the test and return to the other room. As I remove the cap, he offers me the use of a large, industrial-looking lab sink to wash the gel out of my hair. A bottle of VO5 shampoo sits on the counter. Cabinets just like the ones from all my high school science classrooms are stacked with neatly folded bath towels.
“What documentaries do you watch?” the post-doc asks. An EEG takes a while to set up and break down, leaving plenty of time to talk. As he disconnects the machine, we chat about things I’ve seen on science shows — how labs are coming up with ways to run fMRI scans on all kinds of animals, even dogs. He explains to me how fMRI scans are done on mice. I tell him how I first got interested in these things watching Scientific American Frontiers with Alan Alda. After his role on M*A*S*H, Alda hosted the series for PBS. He interviewed scientists across the United States and participated in many behavioral experiments. It was one of my favorite television shows when I was in high school.
“I think he’s had his brain scanned for science more times than anybody,” I say.
“Don’t be so sure!” he says in a startlingly matter-of-fact tone. “Some of us could probably beat his record.” And I thought the seven times I’ve been scanned was a lot! I want to find out more.
That man has moved on to a teaching position at another university, so I interview one of his colleagues, Aaron Bornstein. He is a post-doc in Princeton’s Computational Memory Lab, under professors Ken Norman and Jonathan Cohen. They study memory and decision making. He doesn’t remember the exact number of times he has been scanned in the past 10 years. “Probably less than 100. Certainly more than 30,” he estimates.
It’s very common for neuroscientists to test experiments for their colleagues when they are in the initial planning stages. “I think it’s just practical for those of us who run fMRI experiments to be scanned regularly, so we get a sense of what it’s like to be inside the scanner,” Bornstein says. For example, he is careful not to change the color and brightness of the screen too rapidly because it sometimes hurts his eyes. He says it’s like when you’re in a dark movie theater and the lights come on. Your eyes have to adjust. “We try to make it very comfortable for people to look at the screens. I [also] try not to keep people in there too long; try to give them lots of breaks. It’s just good to remember what it’s like to be in there.”
I always knew that fMRI is a very safe, innocuous procedure. It doesn’t use dangerous radiation like X-rays or CAT scans do. You don’t have to drink any special liquid or get injected with anything. It works by generating a powerful magnetic field and radio waves. The only serious risk is if you bring metal near the machine. Ferromagnetic metal objects may be pulled forcefully into the machine, causing severe injury if they hit a person. Some nonmagnetic metal objects can also be dangerous because they may get hot and cause burns under certain circumstances. This poses a continuous threat because the magnet cannot be turned off. Some types of nonmagnetic metal implants, like braces and permanent retainers, can be perfectly safe, but many researchers don’t allow them because they cause small blurry spots on the scan that make certain brain areas harder to see.
That’s why the university takes so many precautions. The researchers have a lengthy screening questionnaire listing every imaginable reason you could have metal in your body. After checking that, the researcher verbally asks if you have certain implants or clothing items. Unlike some facilities, Princeton doesn’t require you to wear a hospital gown, but you are warned in advance to wear clothing with no metal in it. They check you with a portable wand and there are additional metal detectors at the entrances to each of the two scanner rooms.
As long as your body is free of metal, both fMRI and plain medical MRI are thought to be very safe. The only other potential concern is noise from the machine. It’s very loud. That’s why they give you earplugs. Participants are warned that there is a slight possibility of harmless peripheral nerve stimulation, which could cause an uncomfortable tingling sensation. I’ve never experienced that. Claustrophobia can also be an issue. That’s why you’re given a rubber squeeze bulb to hold during the scan. It sets off an alarm to alert researchers if you want to stop the test. People with even the slightest propensity for that problem are urged not to participate.
                Controversial projects like the Milgram experiment at Yale in 1961 and the 1971 Stanford prison experiment precipitated major reforms. Researchers probably couldn't get away with anything like that today.

Universities also take precautions that medical facilities don’t. They want to be sure not to put anyone at unreasonable risk, since there is no medical benefit to justify it. That’s why pregnant women are not allowed to be scanned for research. It’s thought to be safe for the women and fetuses but research facilities err on the side of caution. These policies are mandated by the federal government and the university’s institutional review board.
Surprisingly, though, there are no limits on the number of times a person can be scanned. Surely there must be concern about some possible long-term risk from exposure to the radiofrequency and magnetic fields. I ask Bornstein about this.
“Nope,” Bornstein says confidently. “Before I even started doing fMRI there had been people working and being subjects and having medical MRIs done on them for decades and there haven’t been any problems reported.” Bornstein, who has worked in a number of labs across the country since 2005 and has been at Princeton for the past two-and-a-half years, says he would be more concerned about risk from frequent plane travel than any potential risk from getting scanned so often. And a lot more people fly than get fMRIs. “I think [fMRI] is pretty safe,” he says.
What would Princeton researchers do if someone showed up with a problem like a tumor? “We are not medical doctors,” Bornstein says emphatically. “And the scans that we perform are not medical scans. They’re not likely to detect any kind of real problems.”
The university has a two-level certification process for anyone who works with fMRI. For level one, all researchers must take a standard federal human-subjects research ethics course and score perfectly on a tough written exam about issues of safety and emergency procedures. To achieve level two, they must train under an existing level-two person and pass a two-hour practical exam, for which they run through a scan session and discuss proper responses to every possible emergency. Only level-two people are allowed to actually operate the machine.
The university has protocols in place in case there is a suspicious finding on a participant’s scan. But these experiments are no replacement for medical tests. If they do happen to find something, they will tell the participant to get checked by a licensed medical doctor.
Neuroscience is a diverse field that crosses many academic disciplines. Neuroscientists may have backgrounds in psychology, computer science, molecular biology, physics, or math, just to name a few. Originally from New York, Bornstein has a bachelor’s degree in mathematics from MIT and a Ph.D. in psychology from NYU. “I got into the field because I thought it was an interesting question: how does the brain work?”
He has previously worked in computer security and knows how to write software. He writes the game-like, computer-based experiments that he uses in his research. This is common at Princeton. “It’s not necessarily the same as writing the production software that you would see shipped by a commercial company, but you do have to write software,” says Bornstein. “I think everyone here knows how to program at least a little bit.”
He is particularly interested in studying how people make decisions about money. He designs computer-based behavioral tasks to test his theories. The few I have participated in typically involved learning associations between pictures of places and objects. They tended to be the type of task where you could earn extra money based on your performance.
A common theme on science shows is self-discovery. Countless documentaries show people finding out all kinds of revelations about themselves: Answers to lifelong questions. Why they’re good at music. Why they have struggled with a particular learning disability. Don’t expect to have that experience doing paid experiments at Princeton. You will be disappointed. When you see something like that, it’s likely the end result of years of research.
Safety First
If you participate in the “brave new world” of human experimentation at Princeton University, it may be comforting to know that there are several “big brothers” looking out for you. The university researchers are subject to the regulations of the Food and Drug Administration (FDA) and Department of Health and Human Services, including the Office for Human Research Protections. As part of those regulations, research projects are also subject to scrutiny by an institutional review board (IRB) empowered to approve, require modifications to, or reject research projects.
Princeton University has its own IRB. One of its members is Leigh Nystrom, the director of the Scully Center for the Neuroscience of Mind and Behavior at the Princeton Neuroscience Institute.
“The IRB is concerned with federal laws and guidelines but also with the general well-being of the participants, over and above the minimum standards set by the U.S. Department of Health and Human Services,” says Nystrom. “As it happens, Jonathan Cohen (a co-founder of the Princeton Neuroscience Institute) and I have been conducting fMRI studies together since the early 1990s, when fMRI was first invented, so in a way our policies have likely influenced some of the current community standards at other institutions. There are professional societies (such as the Organization for Human Brain Mapping) and funding sources (such as the National Institute of Mental Health) in which Cohen served during the early years of this technology, probably helping to steer some of the policies recommended by these organizations.”
It takes a long time to get to that point. It can take months just to analyze the results from a single experiment. Researchers aren’t able to tell you anything about your brain activity right after a scan. Says Bornstein: “There’s very little you can tell about a person from the kinds of studies we do. It’s not clear that the field knows enough about individual variation yet.” The particular field to which he refers is computational neuroscience. Much of Bornstein’s work is about finding better ways to collect and analyze data. “We know quite a bit about what brains do in general, but the range of differences among people is still a matter of significant research,” he says.
One reason for that, Bornstein says, is “because the data is very noisy and so what you need is an average of many, many people. We do about 32 people per study now. And that will give you reasonable, statistical effects.” Some examples of “noise” would be fluctuations in the magnetic field, or the person’s mind wandering at a particular moment while they are being scanned. Researchers need to be sure that brain activity they observe is actually from the experiment and not one of those other factors. They need to collect a lot of data from a lot of people before they can know for sure. “The hope is that if a single person does hundreds of repetitions of the experiment, hundreds of trials within the experiment, and many, many people do it over and over again, that all that noise would just average out,” says Bornstein. That’s why many behavioral experiments will have you repeat the same action for the duration of the study. Think rounds on a game show, or levels in a video game.
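Bornstein’s point about noise averaging out can be shown with a toy simulation in Python. The signal and noise values below are invented for illustration; they aren’t taken from the lab:

```python
import random

random.seed(0)  # reproducible illustration

TRUE_SIGNAL = 1.0   # hypothetical "real" brain response (arbitrary units)
NOISE = 5.0         # scanner drift, mind wandering, etc. -- much bigger than the signal

def one_trial():
    """One noisy measurement: the true signal buried in random noise."""
    return TRUE_SIGNAL + random.gauss(0, NOISE)

def average(n):
    """Average n repeated trials; the noise cancels out roughly as 1/sqrt(n)."""
    return sum(one_trial() for _ in range(n)) / n

# A single trial is dominated by noise...
print("one trial:", round(average(1), 2))
# ...but ~300 trials from each of ~32 people pulls the signal out.
print("averaged: ", round(average(32 * 300), 2))
```

With thousands of simulated trials, the estimate lands close to the true value even though any single trial is mostly noise — which is exactly why the studies have you repeat the same task over and over.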
It’s also why people who are left-handed are excluded from some experiments. Researchers want to be sure that everyone is doing the task in a consistent way. For example, all subjects may be expected to use their right hand to press buttons in a computer game, and reaction time may be slower in people for whom that is not their dominant hand.
Every research technique has its benefits and drawbacks. One drawback of fMRI is that it’s correlational, not causal. In other words, it observes what the brain does but doesn’t show cause and effect. Just because more blood flows to a certain area when you do a task doesn’t necessarily mean that part of the brain is needed for the task. This isn’t an issue for Bornstein, because he’s only interested in observing.
I talk to another researcher, Taylor Webb, a fourth-year graduate student who wants to take things to the next level. To do that he uses a more intrusive research technique — a tool that actually does something to the brain. This is the most surprising type of human experiment being done at Princeton and the only one I haven’t personally experienced. It’s a machine that sends pulses of magnetic energy into the surface of the brain in order to directly affect its function. This technique is called transcranial magnetic stimulation, or TMS.
If you Google “TMS,” you will find videos of it being used for some pretty amazing things. Depending on where the magnet is placed on a person’s head, it can have a variety of effects. If it is placed over the brain’s sensory motor strip, it causes a person’s fingers to move. If it is placed over a person’s speech center while they talk, it momentarily disrupts their speech. Scientists are using it to treat autism symptoms with surprising success and figuring out how to enhance brain function of healthy people in all kinds of ways. It is used therapeutically to treat depression and is being studied for use in other mental illnesses, as well.
This is absolutely not to be confused with ECT, or electroconvulsive therapy, commonly known as electro-shock therapy. Though both can be used to treat depression, ECT and TMS are very different. ECT is a medical procedure, done only in a hospital setting with anesthesia. ECT administers an electrical shock to a large area of the brain in order to produce a seizure. The seizure is thought to help depression symptoms. Side effects include memory loss and confusion.
TMS only briefly affects very small, pinpointed brain areas. It doesn’t have significant side effects and the last thing researchers ever want to do is to cause a seizure. TMS is performed by placing a figure-eight-shaped magnet, called a coil, over a person’s head, above the brain area they want to stimulate. The coil is attached to a machine.
It doesn’t produce a direct electrical current that gives you a shock in the traditional sense. It works instead through electromagnetic induction. The TMS coil induces a strong but spatially confined magnetic field around itself. When a magnetic field changes in strength very rapidly (turning on and off for a fraction of a second), the field can induce a corresponding electrical signal in a conductor under the coil. In this case, neurons in the brain act as the conductor. Says Webb: “It can only affect superficial areas of the brain (areas that are on the surface). It affects a one to two-centimeter-wide area of the cortex, no deeper than that. That sort of scrambles the signals being made in that brain region, so you can very, very slightly impair function in that brain region, very temporarily, and test the effects that it has.”
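The induction at work here is, at heart, Faraday’s law from introductory physics (a textbook sketch added for context, not a quantitative model of TMS): a magnetic flux $\Phi_B$ through the tissue that changes rapidly in time induces an electromotive force,

```latex
\mathcal{E} = -\frac{d\Phi_B}{dt}
```

What matters is the rate of change, which is why the coil’s field must switch on and off in a fraction of a second to drive currents in nearby neurons.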

                TMS can sound like a form of mind control from a sci-fi show, but it's actually a very subtle, non-invasive stimulation of the brain.
So what amazing thing is this machine being used for at Princeton? Studying consciousness. Webb works in the lab of professor Michael Graziano, who is well known in this field of study. Graziano has written several books, including the popular “God, Soul, Mind, Brain” and “Consciousness and the Social Brain.” He has also written several fiction and children’s books. He uses an orangutan puppet named Kevin to demonstrate his theories in lectures and public demonstrations. Graziano is also known for his discoveries about how the human sensory motor strip in the brain works. He used TMS to prove that the body map within the brain is a lot more intricate and detailed than previously believed. Graziano and his colleagues have a specific theory about the relationship between attention and consciousness.
The opportunity to use TMS to test this theory is what attracted Webb to Princeton after receiving his undergraduate degree in neuroscience from the University of Southern California. Specifically, the researchers are interested in unconscious effects on attention. “There are subliminal stimuli that participants aren’t conscious of but we can show [that it] impact[s] their attention in subtle ways,” Webb says. This means that people can still technically see something and respond to it, even if they are not consciously aware that they see it.
“We think that a region in the parietal lobe plays an important role in that,” Webb says. “We see that region becomes active when people are conscious of stimuli but not when people are unconscious of them, even when those stimuli are drawing attention. So it doesn’t have to do with attention, it’s more about consciousness.”
The specific brain area he studies is called the temporoparietal junction (TPJ). You actually have two temporoparietal junctions. They are located slightly above and behind the ear on each side of your head. This is an important brain area that is of interest to many scientists. It has been found to be involved in information processing and the perception of oneself and others. Webb is studying the role it plays in consciousness and controlling attention. He explores this by testing how people respond to visual cues that they are not consciously aware of.
I have never had TMS myself, but I have participated in several versions of Webb’s studies that will eventually include it. His experiments most remind me of something I’d expect to do at an eye doctor’s office. You sit in a special adjustable chair, usually with your head in a chin rest, and press buttons when you see a particular object that you’re supposed to pay attention to. Objects may be letters, numbers, or shapes, that flash on screen very quickly.
The general goal of these experiments is to test whether you can still see an object and respond to it without being consciously aware that you see it. He wants to see if and how often you will press buttons in response to objects you saw on the screen, even if you’re not consciously aware of them because they flashed too quickly, and if these stimuli will affect which parts of the screen you pay attention to. “Basically our main question is, if we apply TMS to this region of the parietal cortex, the TPJ, will it make people less likely to become aware of that stimulus — to notice that subtle difference? And as a separate question, will it make it so that the stimulus doesn’t draw attention? Our hypothesis is that those two things are independent. It might still draw attention even though people are less likely to become conscious of it.”
The effects you can expect from participating in Webb’s experiments are likely to be far less dramatic than what they show in science documentaries. There are actually two types of TMS. The more popular type is called repetitive TMS, or rTMS. This is the kind of machine that’s used for therapeutic purposes and produces the more dramatic, noticeable effects. It delivers a series of repeated magnetic pulses. It is intended to make lasting changes to brain function, which may persist for hours or even weeks after the treatment.
The kind used at Princeton is called single pulse TMS. It only delivers one pulse at a time. It works the same way as rTMS but is much weaker and doesn’t have a lasting effect on the brain. “The only thing that is subjectively noticeable from single pulse TMS at this strength is if I put it over the part of your motor cortex that controls your fingers, I can elicit twitches in specific fingers,” Webb says. The effect is very small and wears off in seconds. Says Webb: “We can time it so that in a trial, a particular few hundred millisecond chunk of time has an effect, but it’s not something that’s going to last even onto the next trial, much less later in the day.” He says that using a stronger machine would actually defeat the purpose of his research because he wants to see how performance on the task is affected from moment to moment.
In order to pinpoint the correct brain area, each participant must first undergo an MRI. Webb doesn’t stimulate the same area in each participant. He uses TMS on the right or left side and in some subjects stimulates a slightly different location to act as a control. That would be a brain area slightly outside of the TPJ that he doesn’t think will affect the experiment. That’s how he accounts for any placebo effect.
The temporoparietal junctions are located in the same general area in everyone, but the exact location varies from person to person. He says it can actually vary by up to a few centimeters, which is a lot. So he needs the MRI data to pinpoint the correct location for each participant so that he can properly calibrate the TMS machine.
During the TMS session the participant wears a head band that has three protruding reflective points. The TMS coil has three corresponding reflective points on it. A camera in the corner of the ceiling tracks the relative positions of the coil and the person’s head. A special computer program calculates where the center of the coil is relative to the part of the brain they want to stimulate.
The biggest safety concern with TMS is the small risk of seizures. “If someone has an underlying tendency toward epilepsy or if it’s known that they have epilepsy then we definitely wouldn’t do TMS on that person,” says Webb. “It’s the sort of thing that if you’re already very likely to have a seizure, this could provide just the little bit of extra oomph that it needs to go over the edge into causing a seizure. From single pulse TMS at strengths much higher than [our machines] go, there’s one recorded instance of having caused a seizure in someone who is already known to have epilepsy. At these strengths in normal people who don’t have already known cases of epilepsy, there are no actual instances of causing a seizure.”
To be able to use this machine, Webb had extensive training and was certified by the university. So far he has only run one TMS experiment. He has used the machine about 40 to 50 times on about 30 people.
How do people react to the experiment? Do they get nervous? “They usually seem fine about the TMS itself. I would say that the tasks I have participants do are probably a little bit on the boring side,” he says. Webb says the participants are not usually any more nervous about the procedure than they are about getting an fMRI scan for the first time. “The first thing that I always do is I apply a TMS pulse to the participant’s wrist, just so they know what it feels like at the lowest strength possible and then slowly raise it so they understand what they’re getting into, and just make it totally clear that if you want to opt out of this at any point, that’s totally fine,” he says. “I’ve never had that happen. I would say basically, in terms of what it feels like, it feels like just sort of a light flick. At most I would say it’s kind of annoying — but not painful.”
I ask Webb what his friends and family think of his work. “I think people are usually a little bit surprised that you can noninvasively stimulate someone’s brain. But I think that once I describe how weak the effect is, I think it’s maybe less surprising. The effect is at a threshold level so it’s not like mind control or something like that. It’s a very subtle effect. It’s easy for people to get almost a sci-fi impression of it, but I think once you describe it more concretely, it makes more sense.”
Join the Experiment
Want to get involved? Go to
This is an especially good opportunity for high school students who want to go to Princeton or become scientists. Be sure to check the eligibility requirements for each study, as many have upper age limits. You have to be under 35 or 40 for many of them, but not all. Participants for most studies must be right-handed and not have any psychological or neurological conditions. Free visitor parking is available.
At the time of our interview Webb is recruiting participants for another TMS experiment he will be running shortly. The pay is $20 per hour. Participants can potentially make $80 to $400 for the whole experiment, depending on the number of times they participate.
Another Monday is upon me. I check my account on the paid experiment website again. Two new studies are posted that I haven’t done. Just my luck, they both have back-to-back time slots for the coming Saturday, perfect! I essentially press buttons on a computer for an hour — and get to meet fascinating people who are on the cutting edge of science. A most unusual way to fund my shopping excursions in downtown Princeton, but for $12 an hour I think it’s a good deal.

Thursday, March 3, 2016

Reconstructing History: 3D Printing at the Intersection of Art, Design, and Science Education

Three-dimensional printing is changing the face of science, art, history, and design. As these futuristic machines, which can reproduce almost any physical object, become cheaper and more accessible, creative people find new uses for them every day. Nowhere is this more apparent than in education. Students in two Virginia school districts are using the technology to build functional models of machines to better understand how they work. They’re now able to do so more quickly and cheaply than ever before.
American Innovations in an Age of Discovery is a joint initiative between the Joseph Henry Project for Historical Reconstruction at Princeton University, the Smithsonian Institution, and the Laboratory School for Advanced Manufacturing at the University of Virginia. The goals of the project are to reconstruct important historical inventions and develop an innovative new middle school science curriculum. The Smithsonian is creating an online database of 3D animated artifacts called Smithsonian Invention Kits. These open-source materials are being used by the Virginia students to build working reconstructions of historical machines. The curriculum is based on Engineering in the Modern World, a Princeton course developed by professors Michael Littman and David Billington. The program is being piloted at middle schools in the Charlottesville City and Albemarle County public school districts.
The goal for the students is not simply to reproduce exact replicas, but to reinterpret and reinvent the devices with a modern twist. For example, they may change the size or placement of parts in a device to see how that affects its function. With 3D printing, they’re able to easily change the shape and size of each component part, any way they can think of. Each model is designed to display and teach a specific engineering or physics principle. Students can actually see all the physical principles at work and understand their applications.
“We are essentially bringing the principle to life in a clear progression. Instead of it being just an equation on a blackboard, it becomes real for the students.”

This original Page solenoid is an electric motor that was invented in 1854 by Charles Page. Students use photorealistic animated designs to build working replicas of the device. Photo source: Smithsonian, 3d.si.edu/invention
The Page solenoid motor is used to demonstrate the solenoid effect, which is the way that certain metals are pulled into a magnetic field due to the gradient, or fringing, of the field. The effect is demonstrated by switching the solenoids on and off, producing continuous motion. Then the motor is used to perform a useful task (e.g., lift a weight or turn fan blades). Students then apply what they’ve learned and build their own 3D-printed versions of the device.
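The pull the students observe can be summarized with a standard textbook approximation (a simplification added here for context, not part of the curriculum materials): the force on a piece of soft magnetic material points up the gradient of the field strength,

```latex
\mathbf{F} \propto \nabla\!\left(B^{2}\right)
```

In a perfectly uniform field the gradient is zero and there is no pull; it is the fringing field at the ends of the coil that draws the plunger in.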

“We are essentially bringing the principle to life in a clear progression. Instead of it being just an equation on a blackboard, it becomes real for the students,” says engineer Luke Stern. For two summers, Luke worked at Princeton University as a student researcher under Professor Littman while pursuing his engineering degree. He is now the project’s lead animator and designer, working from California, where he’s also starting a design business and conducting his own research in 3D printing and engineering.



 
This is a CAD-rendered reinterpretation of the Page motor, created by Luke Stern, lead designer and animator on the American Innovations in an Age of Discovery project. The purpose of this particular model is to teach students about the solenoid principle. Picture courtesy of Luke Stern.


He uses computer-aided design (CAD) software to model the devices before they are printed. He also makes 3D animations to be used in lectures. Some of these educational materials have been used in engineering courses at Princeton University.
            Some of Luke’s designs are simply reinterpretations of existing machines, but he also makes his own original devices. So far, he has developed a 3D-printable loudspeaker, a microphone, a gear pump, a drop tower catapult, and a pendulum clock. He is currently in the process of building a website, Princeton Concepts, where plans for all of these designs will be available. He hopes to have it online by the end of March 2016.

                               “To be able to replicate something that easily, which can not only produce noise, but can sound quite good, I believe, can be very inspiring to young engineers.”
 
Many designs Luke creates are distinctive 3D-printed versions of existing inventions. They are similar to their historical counterparts, but not exact replicas. An example is his Galileo Pendulum, a scaled-down version of the original clock mechanism invented by astronomer Galileo Galilei. Picture courtesy of Luke Stern
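A side note on that scaling: for small swings, a pendulum's period depends only on its length, T = 2π√(L/g), so a scaled-down replica like the Galileo Pendulum simply ticks faster than the original. A minimal Python sketch (the lengths are illustrative, not the actual dimensions of Luke's model):

```python
import math

def pendulum_period_s(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Halving the length shortens the period by a factor of sqrt(2).
print(f"1.0 m pendulum: {pendulum_period_s(1.0):.2f} s per swing")
print(f"0.5 m pendulum: {pendulum_period_s(0.5):.2f} s per swing")
```

This is exactly the kind of relationship students can test directly by printing the same mechanism at different scales.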
One of Luke’s original designs is a speaker that is fully 3D printable. The particular challenge was to produce something that both sounds good and uses parts made exclusively of one plastic material. He says it’s difficult to design something that is both easy to print on any machine and performs well. For example, it took several attempts to perfect the cone, the part of the speaker that vibrates to produce the sound; he had to test many different prototypes before he finally got it right. “For any individual to be able to download the files and print or modify them to their own custom version is remarkable. To be able to replicate something that easily, which can not only produce noise, but can sound quite good, I believe, can be very inspiring to young engineers,” he said.


 
A fully 3D printable speaker: “One particular advantage of using 3D printing to design and manufacture the speaker is that one can very easily and quickly experiment with different cone shapes. Different shapes will give you different mechanical stiffness, which affects the frequency response of the speaker. This concept could be incorporated into a very interesting undergraduate engineering exercise,” Luke says. Picture courtesy of Luke Stern
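The stiffness-to-frequency link Luke describes can be illustrated with the simplest possible model: treat the cone as a mass on a spring, whose fundamental resonance is f = (1/2π)√(k/m). This Python sketch uses invented stiffness and mass values purely to show the trend (it is not a model of Luke's actual speaker):

```python
import math

def resonant_frequency_hz(stiffness_n_per_m, moving_mass_kg):
    """Fundamental resonance of a cone modeled as a mass on a spring."""
    return math.sqrt(stiffness_n_per_m / moving_mass_kg) / (2 * math.pi)

# A stiffer cone geometry pushes the resonance higher (illustrative values).
for k in (2000.0, 8000.0, 32000.0):
    f = resonant_frequency_hz(k, moving_mass_kg=0.005)
    print(f"stiffness {k:>7.0f} N/m -> resonance {f:6.1f} Hz")
```

Quadrupling the stiffness doubles the resonant frequency, which is why reshaping the printed cone shifts the speaker's frequency response.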
The design process involves plenty of trial and error. For the educational models, he begins by video conferencing with professors at Princeton and the University of Virginia, who identify a valuable engineering concept or device they want to develop. Next, Luke thoroughly researches the concept to gather the information needed to make a proper design. He then draws up sketches in his notebook to visualize the best approach for a 3D-printed design before modeling parts in CAD software. He’ll sometimes start by printing one subcomponent to make sure it’s workable; other times, he draws out almost every component in the assembly before prototyping it. It depends on whether or not the device uses all 3D-printable parts.
“In my designs, I try to incorporate as many 3D printed parts as I can, because the more 3D printed parts, the easier it will be for anyone with a printer to manufacture.” It usually takes around eight to ten hours to print a device on his printer.
After he has the first printed prototype, he goes through a troubleshooting process of design changes and reprinting until he has a device he’s happy with. He also makes videos of the devices and educational animations of them for the University of Virginia to use in its school curriculum program.
Animation can be a time-consuming process. “When you do photo-realistic animation, the render times are outrageous because of the ray tracing that’s required… It’s a very CPU-intensive process,” Luke said. Even with a powerful computer, it can take about two hours to render one second of video, depending on the complexity of the model and the video resolution. The Page solenoid motor animation on the Smithsonian website took two to three days to complete, and he made three different versions of it.
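The arithmetic behind those multi-day renders is straightforward. A tiny Python sketch, using the roughly two-hours-per-second figure quoted above (the 12-second clip length is an assumption for illustration, not the actual length of the Smithsonian animation):

```python
# Rough render-time budget for a ray-traced animation.
def render_budget_hours(clip_seconds, hours_per_second=2.0, versions=1):
    """Total render time for a clip, given hours of rendering per second of video."""
    return clip_seconds * hours_per_second * versions

# e.g., a 12-second clip rendered in three versions:
total = render_budget_hours(12, versions=3)
print(f"{total:.0f} hours (~{total / 24:.1f} days)")
```

At that rate even a short clip in a few variants adds up to days of continuous computation, which is consistent with the two-to-three-day figure for the solenoid animation.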

Luke uses a variety of animation and CAD software to produce the animations. To draw everything, he uses a program called Creo Parametric, which he used to create the photo-realistic Page motor simulation on the Smithsonian website and the videos on his YouTube channel. He also uses KeyShot to produce photo-realistic animations, and he does the final video edits in Adobe Premiere Pro.
About 80 to 95 percent of each device Luke fabricates is 3D printed, usually from a plastic such as PLA (polylactic acid). It’s possible to print with many different materials, including various metals, but Luke prefers this plastic. “It’s cheap, rigid, true to form, and biodegradable,” he says. The printed plastic mainly serves as physical scaffolding for a device. Parts that can’t be printed or made of plastic, such as the magnet wire in the solenoid motor, have to be ordered from an outside supplier. Certain parts may also be cut with a die cutter or a laser cutter.
The educational models usually have a non-3D-printed design component incorporated into them, which enables students to change something about the device. For example, they can change the windings on the solenoid motor to see how that affects its performance. “Many times there is also an integration of standard off-the-shelf mechanical components to increase the performance of a device, for example, using standard steel pins to reduce the friction in the Galileo Pendulum pivots. Some of what we do is exploring inexpensive standard components that can be easily added to a 3D printed design.”
“You really haven’t designed something well unless your grandmother can make it.”

Luke wants every device he designs to perform well while still being fully printable and easy to replicate. “A good designer or engineer doesn’t just come up with a cool device or component in 3D; he or she should be able to make sure that device can be properly manufactured,” he says. He strives for simplicity, citing a famous Albert Einstein quote, “You really do not understand something well unless you can explain it to your grandmother.” Luke’s version: “You really haven’t designed something well unless your grandmother can make it.”

Below is a sampling of some of Luke’s projects. His main YouTube channel is LS Concepts. For more information about American Innovations in an Age of Discovery, go to http://3d.si.edu/invention

CAD rendering of a 3D printable microphone