This week was incredibly hot! We had heat wave warnings on several days. I thought I was in Florida again. We didn’t let the heat hold us back, though, visiting different places and even the beach. (Side note: I have found 1 of the 2 Mogu Mogu flavors I had left to try. 7 down, 1 to go.)

Excursions

This weekend Busa, Nadia, and I went down to the city of Gunsan — an industrial city by the sea. We met up with a friend there to celebrate Nadia’s birthday. Gunsan is great!

  1. Exploring Gunsan

On our first day in Gunsan, we visited all the museums within the city. There is a Teddy Bear Museum, a museum of the first bank in Korea, and the Gunsan Museum of Modern History (which had a lot of cool structures you could go into/interact with). We also visited the oldest bakery in Korea, which had delicious pastries. 

Figure 1: Images of some of the places we visited on our first day in Gunsan. Left: us at the Teddy Bear Museum in Gunsan. Right: me in one of the shops at the Gunsan Museum of Modern History. 

We ended the evening at Gyeongamdong Railroad Town, a mostly abandoned railway line with a bunch of shops set up around it.

Figure 2: Us at Gyeongamdong Railroad Town in Gunsan.

  2. Going to the Beach

The next day, we woke up early and went over to the beach. The beach at Gunsan is really cool; we all collected sea glass and nice rocks there. The water was warmer than the water at Busan’s beaches and wasn’t as deep. It was a lot less crowded as well. I liked this beach a lot!

Figure 3: The gang at the beach in Gunsan.

At the beach, we also rode a zipline! It was from pretty high up and went from one side of the beach to the other. You can see a cool gif of it in Busa’s blog post!

Figure 5: The gang about to go down the zipline in Gunsan.

Overall, I can confidently say that I loved Gunsan! It’s a quieter town, but definitely worth spending a weekend in.

Working in the Lab

This week in the lab, we reviewed the results of retraining my object detection algorithm. Luckily, the results were not 0% accurate this time! Unfortunately, I still had only about 30% accuracy overall. Some items had a detection rate of 60% or more, which is about what I expect at this point. However, a handful of objects were at 10% or less, which is bad. So, this week I was tasked with identifying these objects, which I started calling the “problem children,” and figuring out why their accuracy is so low.
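
(For anyone curious what “detection rate” means here: it’s just the fraction of labeled instances of an object that the model actually found. Below is a simplified sketch of how that kind of per-object number can be computed, with made-up toy data; it is not the actual evaluation script we use in the lab.)

```python
from collections import defaultdict

def per_class_detection_rate(ground_truths, detections):
    """Fraction of ground-truth instances of each class that the model found.

    ground_truths: iterable of (image_id, class_name) for every labeled instance
    detections:    set of (image_id, class_name) pairs the model detected
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for image_id, cls in ground_truths:
        totals[cls] += 1
        if (image_id, cls) in detections:
            hits[cls] += 1
    return {cls: hits[cls] / totals[cls] for cls in totals}

# Toy example with made-up numbers, just to show the shape of the output
gt = [("img1", "bandaid_box"), ("img2", "bandaid_box"), ("img1", "cheezit_box")]
det = {("img1", "bandaid_box")}
rates = per_class_detection_rate(gt, det)
problem_children = {cls: r for cls, r in rates.items() if r < 0.20}
print(rates)              # {'bandaid_box': 0.5, 'cheezit_box': 0.0}
print(problem_children)   # {'cheezit_box': 0.0}
```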

I identified 14 problem children, and 19 objects in total had an accuracy of less than 20%. The first thing I looked at was the size of the objects in the dataset. If an object always appears large in the training set, the algorithm will have trouble identifying it when it appears smaller in the testing set. So, I compared each problem child’s average size in the training set versus the testing set.
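
Here is a rough sketch of the kind of comparison script I mean, assuming the annotations are stored in COCO-style JSON with [x, y, width, height] bounding boxes (the file names are placeholders, and our actual dataset format may differ):

```python
import json
from collections import defaultdict

def average_bbox_area(annotation_file):
    """Average bounding-box area (in pixels) per object class, assuming
    COCO-style annotations where bbox = [x, y, width, height]."""
    with open(annotation_file) as f:
        coco = json.load(f)
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    areas = defaultdict(list)
    for ann in coco["annotations"]:
        w, h = ann["bbox"][2], ann["bbox"][3]
        areas[id_to_name[ann["category_id"]]].append(w * h)
    return {cls: sum(a) / len(a) for cls, a in areas.items()}

# Hypothetical file names -- compare average object size in training vs. testing
train_sizes = average_bbox_area("train_annotations.json")
test_sizes = average_bbox_area("test_annotations.json")
for cls in sorted(set(train_sizes) & set(test_sizes)):
    print(f"{cls:20s} train: {train_sizes[cls]:9.0f} px  test: {test_sizes[cls]:9.0f} px")
```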

Figure 6: A table comparing the sizes of each object in the training set versus the testing set. The objects highlighted in green are the “problem children,” and the areas marked in yellow/orange/red show how much a given size is lacking in the testing dataset.

Overall, after comparing the sizes of the objects in the training set versus the testing set, I found that while there are a lot of differences, the overall pattern of sizes matches up. The testing set contains more small objects than large ones, and the training set should (and does) reflect that. So I don’t believe size is a major reason why the accuracy is so low.

The next thing I did was investigate the training dataset itself. We trained the algorithm to identify 60 different objects, even though I only made the dataset for 15 of them; the other 45 came from work previous students had done. However, a lot of those 45 objects had very low accuracy and thus warranted investigation. Upon looking through the data, I found that four of the objects were poorly segmented! I remade the training set for those objects.
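
I found those four by scrolling through the images by hand, but a simple automated sanity check could help flag candidates like this in the future. Here is a sketch of one possible heuristic (just an idea, not something we currently run), again assuming COCO-style annotations where "area" is the mask area: if a segmentation mask covers only a small fraction of its bounding box, part of the object may have been cut off.

```python
import json

def flag_suspicious_masks(annotation_file, min_fill_ratio=0.25):
    """Flag annotations whose segmentation mask covers only a small fraction
    of its bounding box -- a rough sign that part of the object is missing.
    Assumes COCO-style annotations with 'area' (mask area in pixels) and
    'bbox' = [x, y, width, height]. The 0.25 threshold is a guess."""
    with open(annotation_file) as f:
        coco = json.load(f)
    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    flagged = []
    for ann in coco["annotations"]:
        bbox_area = ann["bbox"][2] * ann["bbox"][3]
        if bbox_area > 0 and ann["area"] / bbox_area < min_fill_ratio:
            flagged.append((id_to_name[ann["category_id"]], ann["image_id"]))
    return flagged

# Hypothetical file name
print(flag_suspicious_masks("train_annotations.json"))
```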

Figure 7: Left: an example of a poorly segmented image from one object’s dataset; the top of the bottle and its cap are missing. Right: what the object is supposed to look like.

Following this, I compared the training set to the testing set visually. What I found is that the training sets for a lot of objects are brightly lit, while the corresponding testing sets are not as well lit. This means the algorithm learns to identify an object only when everything is very well lit. To resolve this problem, I ran code to make the training images slightly darker.
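
The darkening code itself is nothing fancy. Here is a minimal sketch of the idea using Pillow; the folder names and brightness factor are placeholders rather than the exact values I used:

```python
from pathlib import Path
from PIL import Image, ImageEnhance

def darken_images(src_dir, dst_dir, factor=0.6):
    """Save darkened copies of every PNG in src_dir.
    A factor below 1.0 darkens the image; 1.0 leaves it unchanged."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.png"):
        img = Image.open(path).convert("RGB")
        darker = ImageEnhance.Brightness(img).enhance(factor)
        darker.save(dst / path.name)

# Hypothetical folder names and factor
darken_images("train_images", "train_images_dark", factor=0.6)
```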

Figure 8: An example of how I made the images darker. The top shows how the bandaid box looked before. The bottom left shows the bandaid box with a dark filter over it. The bottom right shows how the bandaid box looks in the testing set. As you can see, the brightness of the bottom left image matches the testing set more closely than the top image does.

Both brighter and darker images are going to be used in the training set; it’s important to make sure the training set covers a diverse range of lighting conditions.
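
Another common way to get that variety, instead of pre-generating darker copies, is to randomize brightness on the fly during training. Here is a small illustration using torchvision's ColorJitter; I'm not claiming this is what our training pipeline actually does, just that it expresses the same idea as an augmentation:

```python
from torchvision import transforms

# Randomly vary brightness (and a little contrast) each time an image is
# loaded, so the model sees both well-lit and dim versions of every object.
train_transforms = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.2),  # factors are placeholders
    transforms.ToTensor(),
])
```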

While comparing the training dataset and the testing dataset, I also found that one object, the Cheez-It box, has training and testing datasets that don’t match at all. It’s a completely different object! I will have to redo that object completely this week.

Figure 9: The testing dataset (left) and the training dataset (right) for the Cheez-It box contain completely different box shapes and designs. They are too visually dissimilar and thus are confusing the algorithm.

This week I will work on redoing the dataset for the Cheez-It box. Once I have that completed, I will also need to write some code to generate images that have a color filter on them. The reason for this is that the testing set contains images with a color filter over them, so the algorithm won’t be able to reach high accuracy until I train it to see objects even when there is a color filter. After that, we will retrain the algorithm and hopefully see better results this time.
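
I haven’t written the color-filter code yet, but it will probably look something like this sketch, which blends a solid tint over an image using Pillow (the tint color, strength, and file names below are placeholders):

```python
from PIL import Image

def apply_color_filter(image_path, out_path, tint=(255, 180, 120), strength=0.3):
    """Blend a solid color over the image to mimic the tinted lighting in the
    testing set. strength=0 keeps the original image; strength=1 would replace
    it entirely with the solid tint."""
    img = Image.open(image_path).convert("RGB")
    overlay = Image.new("RGB", img.size, tint)
    Image.blend(img, overlay, strength).save(out_path)

# Hypothetical file names and a warm orange tint at 30% strength
apply_color_filter("cheezit_box_001.png", "cheezit_box_001_tinted.png")
```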

Dr. Moser’s Workshop

This week in Dr. Moser’s workshop, we did mock interviews with each other. I was interviewed by Kervin and later interviewed him back. Kervin started the interview with the classic question, “So, tell me about yourself,” to which I responded with the elevator pitch we have been working on over the past month. Following this, Kervin asked me a variety of questions about working as a leader and working in cybersecurity. I did my best to answer confidently and concisely, drawing from my previous experience. I was told I did well, but that I could be more specific in my answers. I will make sure to practice answering more technical questions so I can come up with concrete details on the spot. At the end of the interview, I made sure to ask the interviewer some questions as well.

When I interviewed Kervin, I made sure to ask him questions related to business and leadership, both topics he seems interested in. I also tried to make my questions build off of his previous responses. Kervin answered every question very well! Overall, this exercise was very helpful for seeing where my interview answers fall short and what information an interviewer is looking for when they ask a question. Next week will be Dr. Moser’s final workshop, where we will give a 15-minute presentation about our experience in Korea with this program.