News

News from Korea – July 27, 2021: Elizabeth

This week was incredibly hot! We had heatwave warnings on several days. I thought I was in Florida again. We didn’t let the heat hold us back, though, visiting different places and even the beach. (Side note: I have found 1 of the 2 Mogu Mogu flavors I had left to try. 7 down, 1 to go.)

Excursions

This weekend Busa, Nadia, and I went down to the city of Gunsan — an industrial city by the sea. We met up with a friend there to celebrate Nadia’s birthday. Gunsan is great!

  1. Exploring Gunsan

On our first day in Gunsan, we visited all the museums in the city. There is a Teddy Bear Museum, a museum of the first bank in Korea, and the Gunsan Museum of Modern History (which had a lot of cool structures you could go into and interact with). We also visited the oldest bakery in Korea, which had delicious pastries.

Figure 1: Images of some of the places we visited on our first day in Gunsan. Left: us at the Teddy Bear Museum in Gunsan. Right: me in one of the shops at the Gunsan Museum of Modern History. 

We ended the evening at Gyeongamdong Railroad Town, a picturesque abandoned railway with a bunch of shops set up around it.

Figure 2: Us at Gyeongamdong Railroad Town in Gunsan.

  2. Going to the Beach

The next day, we woke up early and went over to the beach. The beach at Gunsan is really cool; we all collected sea glass and nice rocks there. The water was warmer than at Busan’s beaches and wasn’t as deep. It was a lot less crowded as well. I liked this beach a lot!

Figure 3: The gang at the beach in Gunsan.

At the beach, we also rode a zipline! It was from pretty high up and went from one side of the beach to the other. You can see a cool gif of it in Busa’s blog post!

Figure 4: The gang about to go down the zipline in Gunsan.

Overall, I can confidently say that I loved Gunsan! It’s a quieter town, but definitely worth spending a weekend at.

Working in the Lab

This week in the lab, we reviewed the results of retraining my object detection algorithm. Luckily, the results were not 0% accurate this time! Unfortunately, the overall accuracy was still only about 30%. Some items have a detection rate of 60% or more, which is about what I expect at this point. However, a handful of objects sit at 10% or less, which is bad. So, this week I was tasked with identifying these objects, which I started calling the “problem children,” and figuring out why their accuracy is so low.

I identified 14 problem children; in total, 19 objects had an accuracy of less than 20%. The first thing I looked at was the size of the objects in the dataset. If the objects in the training set are always larger, then the algorithm will have trouble identifying an object that appears smaller in the testing set. So, I compared each problem child’s average size in the training set to its average size in the testing set.

Figure 5: A table of object sizes in the training set vs the testing set. The objects highlighted in green are the “problem children,” and the cells marked yellow/orange/red show how much the testing set is lacking a certain size.
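
For the curious, here’s a minimal sketch of what that size comparison could look like in Python. The annotation format (class name, width, height per labeled object) and the helper names are illustrative assumptions, not my actual lab code:

```python
# Sketch: compare average object sizes (bounding-box areas) between the
# training and testing sets. The (class_name, width, height) annotation
# format is a hypothetical simplification.
from collections import defaultdict

def average_areas(annotations):
    """annotations: iterable of (class_name, width, height) tuples."""
    areas = defaultdict(list)
    for name, w, h in annotations:
        areas[name].append(w * h)
    return {name: sum(vals) / len(vals) for name, vals in areas.items()}

def compare_sizes(train_annotations, test_annotations):
    train_avg = average_areas(train_annotations)
    test_avg = average_areas(test_annotations)
    for name in sorted(set(train_avg) & set(test_avg)):
        ratio = train_avg[name] / test_avg[name]
        print(f"{name}: train={train_avg[name]:.0f} px^2, "
              f"test={test_avg[name]:.0f} px^2, train/test={ratio:.2f}")

# Example with made-up numbers:
compare_sizes(
    [("cheezit_box", 120, 80), ("cheezit_box", 60, 40)],
    [("cheezit_box", 30, 20)],
)
```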

Overall, after comparing the object sizes in the training set against the testing set, I found that while there are plenty of differences, the overall pattern of sizes matches up. The testing set contains more small objects than large ones, so the training set should reflect that (and it does). I therefore believe this is not a major reason why the accuracy is so low.

The next thing I did was investigate the training dataset itself. We trained the algorithm to identify 60 different objects, even though I only made the dataset for 15 of them; the other 45 objects came from the work of previous students. However, a lot of those 45 objects had very low accuracy and thus warranted investigation. Upon looking through the data, I found that four of the objects were poorly segmented! I remade the training set for these objects.

Figure 6: Left: an example of an object whose dataset contained poorly segmented images. As you can see, the top of the bottle and its cap are missing. Right: what the object is supposed to look like.

Following this, I visually compared the training set to the testing set. What I found is that the training sets for a lot of objects are brightly lit, while the testing sets for those objects are not as well lit. This means the algorithm learns to identify an object only when everything is very well lit. To address this, I ran code to make the training images slightly darker.

Figure 7: An example of how I darkened the images. The top shows how the bandaid box looked before. The bottom left shows the bandaid box with a dark filter over it. The bottom right shows how the bandaid box looks in the testing set. As you can see, the brightness of the bottom-left box matches the testing set more closely than the top box does.
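
For anyone curious, here’s a minimal sketch of how images can be darkened with Pillow. The brightness factor and file layout are assumptions, so my actual code may differ:

```python
# Sketch: darken a folder of training images so their lighting better
# matches the testing set. The factor and paths are illustrative guesses.
from pathlib import Path
from PIL import Image, ImageEnhance

def darken_images(src_dir, dst_dir, factor=0.6):
    """factor < 1.0 darkens the image; 1.0 leaves it unchanged."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path)
        darker = ImageEnhance.Brightness(img).enhance(factor)
        darker.save(dst / path.name)

darken_images("train_images", "train_images_dark")
```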

Both brighter and darker images will be used in the training set; it’s important to make sure the training set covers a diverse range of lighting conditions.

While comparing the training and testing datasets, I also found that one object, the Cheez-It box, has completely different training and testing datasets. It’s effectively a different object! I will have to redo that object entirely this week.

Figure 8: The testing dataset (left) and the training dataset (right) for the Cheez-It box contain completely different box shapes and designs. They are too visually dissimilar and thus confuse the algorithm.

This week I will work on redoing the dataset for the Cheez-It box. Once that is complete, I will also need to write some code to generate images with a color filter applied. The reason is that the testing set contains images with a color filter over them, so the algorithm won’t be able to reach high accuracy until I train it to recognize objects even when a color filter is present. After that, we will retrain the algorithm and hopefully see better results this time.
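
Here’s a minimal sketch of how such a color filter could be applied with Pillow; the tint color and blend strength are just illustrative, and the testing set’s actual filters may look different:

```python
# Sketch: apply a color filter (tint) to a training image by blending
# it with a solid-color overlay. Tint and strength are placeholders.
from PIL import Image

def apply_color_filter(path, tint=(255, 160, 80), strength=0.3):
    img = Image.open(path).convert("RGB")
    overlay = Image.new("RGB", img.size, tint)
    # Blend the original image with the solid-color overlay.
    return Image.blend(img, overlay, strength)

filtered = apply_color_filter("bandaid_box.png")
filtered.save("bandaid_box_tinted.png")
```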

Dr. Moser’s Workshop

This week in Dr. Moser’s workshop, we did mock interviews with each other. I was interviewed by Kervin and later interviewed him in turn. Kervin started the interview with the classic prompt, “So, tell me about yourself,” to which I responded with the elevator pitch we have been working on over the past month. Following this, Kervin asked me a variety of questions about working as a leader and working in cybersecurity. I did my best to answer confidently and concisely, drawing from my previous experience. I was told I did well, but that I could be more specific in my answers. I will make sure to practice answering more technical questions so I can recall details when answering. At the end of the interview, I made sure to ask the interviewer questions as well.

When I interviewed Kervin, I made sure to ask him questions related to business and leadership, both topics he seems interested in. I also tried to build my questions off of his previous responses. Kervin answered every question very well! Overall, this exercise was very helpful for seeing where my interview answers fall short and what information an interviewer is looking for when they ask a question. Next week will be Dr. Moser’s final workshop, where we will give a 15-minute presentation about our experience in Korea with this program.

News from Korea – July 27, 2021: Ryoma

Outings

Instead of passing straight through Itaewon like last time, Ryoma thought he should try and enjoy the area a bit more. Unfortunately, he discovered that besides the restaurants, cafes, and bars, there actually is not much to do in the area. Still, it was a decent start for the day.

A plate of salmon eggs benedict at The Flying Pan, allegedly the best brunch restaurant in Seoul. A good start to the day.

A strange indoor camping exhibit in Itaewon. As Ryoma thought this was a bit interesting, he decided to explore it. It turns out this was the entrance to an outdoor recreation shop. Props to the marketing team.

Ryoma then went to Namsangol Hanok Village, as that was where Google said he should go when he asked what he could do around Itaewon. Naturally, it was nowhere near Itaewon.

A garden by the courtyard at Namsangol Hanok Village.

One of the hanoks (traditional Korean houses). This one in particular belonged to Master Carpenter Yi, who worked on the restoration of Gyeongbokgung in the late Joseon Period. Like the other hanoks, it was moved here from other parts of Seoul. Unlike the other historical sites he had seen so far in Seoul, these houses were furnished, and one could even reserve some time in them to partake in traditional Korean culture such as tteok-making, tea ceremonies, and more.

A soju still. Clearly, this drink has been enjoyed by Koreans for a long time.

An example of a furnished room in the hanok village.

Wanting to try some of Korea’s more exotic fare, Ryoma then headed to Noryangjin Fish Market, a market best described as an aquarium where the exhibits can be eaten. As the fish are alive (with some exceptions), the market did not smell too bad.

Noryangjin Market, second floor. Besides fresh seafood, this floor held a dried fish section, a salted fish section, a fried seafood section, and restaurants that will prepare some of your catch. On the lower levels, the fare includes skate, shark, salmon, sea bream, and God knows what else.

Sannakji, live octopus tentacles. This dish reportedly kills about six people a year.

A half-kilo of salt-grilled shrimp, an example of some of the more normal fare the market has to offer.

Of course, the fun times would not last. Ryoma spent the next day in a mad dash trying to get some cash. There was one problem, however: Ryoma’s debit card was locked, which required him to call the bank to unlock it. Ryoma was also out of minutes on his phone, which required cash to replenish. Ryoma had no cash. This was a problem.

Work

After wasting a week trying to instrument one of his cameras with an IMU and giving up, Ryoma was able to do so this week with the T265 tracking camera. With this, Ryoma can obtain odometry data, and his robots are fully instrumented for visual SLAM.

The odometry topic reported by the tracking camera, visualized in RViz. The sensor can now observe its position and velocity, enabling localization, the other half of SLAM.
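
For the curious, here is a minimal sketch of reading that odometry stream in ROS with Python. The topic name follows the realsense-ros driver’s default for the T265 and may differ from Ryoma’s actual setup:

```python
# Sketch: subscribe to the T265's odometry topic and log pose/velocity.
# "/camera/odom/sample" is the realsense-ros default and is an assumption.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    v = msg.twist.twist.linear
    rospy.loginfo("pos=(%.2f, %.2f, %.2f) vel=(%.2f, %.2f, %.2f)",
                  p.x, p.y, p.z, v.x, v.y, v.z)

if __name__ == "__main__":
    rospy.init_node("t265_odom_listener")
    rospy.Subscriber("/camera/odom/sample", Odometry, odom_callback)
    rospy.spin()
```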

Ryoma was also given the Gazebo assets for Duckpod, the robot he will implement his algorithms on. These assets were also fully instrumented. Because the simulated robot in Gazebo runs on the same software stack as the real robot, this cuts down on development time: he can use the same code in the virtual environment and in real life.

Duckpod simulated in Gazebo.

Now, all he has to do is enable 3D mapping and localization using rtabmap. While this currently eludes him, he is getting closer to getting these algorithms to work.

Communications Workshop

This week’s workshop was dedicated to mock interviews. Here, Ryoma found his delivery rather flawed, as he subjected his poor interviewer to long, rambling stories. Because of this, Dr. Moser had to cut him off. Ryoma wasn’t a very good interviewer, either.