data-146-page


ML Summary and Analysis

      I really enjoyed this talk! It had the perfect mix of topics I know well, topics I'm currently working to understand, and topics I haven't learned yet but would love to know more about! In our introductory data science class we've spent the last month and a half discussing models and foundational data science terms. As I mentioned earlier, Claire's research presentation showcased how essential it is to understand the fundamentals of data science vocabulary. She discussed two different presentations, and in the first one (Spotify data, I believe) she used the feature-observation language to describe the data. She also talked about doing a train-test split with the same sonic data in order to train a model. These are both technical concepts that we've been learning about, and putting to practical use, in our introductory data science class.
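The idea behind the train-test split she mentioned can be shown in a few lines. This is a minimal, framework-free sketch; the "song" rows are made up for illustration, and in practice a library like scikit-learn would handle this.

```python
# Hold some observations out so the model is evaluated on data it never saw.
import random

observations = list(range(100))       # stand-ins for 100 songs' feature rows
random.seed(0)
random.shuffle(observations)          # shuffle before splitting

split = int(len(observations) * 0.8)  # 80% for training, 20% held out
train, test = observations[:split], observations[split:]
print(len(train), len(test))          # 80 20
```

The key point is that the test rows stay untouched during training, so the score on them estimates how the model handles new data.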

      Something else she discussed, which has come up multiple times in my machine learning class, was freezing layers. She had three examples of ways to freeze and train parts of a model, and ended that slide by saying that what a researcher might decide to do to their data depends on what their data looks like and what they're studying. This is a sentence I've heard Professor Vasiliu say multiple times. Each problem, and dataset, has its own unique challenges, so it's not logical to approach each problem in the same exact way. I appreciated that Claire mentioned this in her research because it shows a practical application of what Professor Vasiliu has been teaching us.
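The freezing idea can be sketched without any deep learning framework: an update step simply skips parameters in layers marked as frozen, so those layers keep whatever they learned earlier. The layer names and numbers below are made up; in a real framework like PyTorch this is done by setting `requires_grad = False` on a layer's parameters.

```python
# Toy "freezing": only layers flagged trainable get updated.
layers = {
    "early": {"weights": [0.5, -0.3], "trainable": False},  # frozen, reused as-is
    "head":  {"weights": [0.1, 0.2],  "trainable": True},   # fine-tuned on new data
}

def apply_gradients(layers, grads, lr=0.1):
    for name, layer in layers.items():
        if not layer["trainable"]:
            continue  # frozen layers keep their learned weights
        layer["weights"] = [w - lr * g for w, g in zip(layer["weights"], grads[name])]
    return layers

grads = {"early": [1.0, 1.0], "head": [1.0, 1.0]}
apply_gradients(layers, grads)
print(layers["early"]["weights"])  # unchanged: [0.5, -0.3]
```

Which layers to freeze is exactly the judgment call Claire described: it depends on the data and the question.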

      As for the topics she mentioned that I'm currently learning, it was interesting to see the high-level description of a neural network at the beginning of her talk. Seeing the input layer, hidden layer, and output layer helped to reinforce what I'm learning in my applied machine learning class. We just started covering ensemble learning, so when Claire discussed the three models in the Road Runner Research, I started to get a picture of what ensemble learning looks like in practice. A new point that I hadn't heard before came up during Claire's audience interaction piece. I didn't know that models could learn to "trust" certain predictors more over time, and therefore grant their "opinion" more weight. It makes sense that this would happen, and learning about that was the highlight of the talk for me.
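That "trust" idea can be illustrated with a simplified weighted-voting scheme (my own toy version, not necessarily what the Road Runner Research used): each of three models starts with equal weight, and a model's weight shrinks every time its prediction turns out wrong, so reliable models end up with louder "opinions."

```python
# Toy weighted ensemble: wrong predictions cost a model half its weight.
def update_weights(predictions, truth, weights, penalty=0.5):
    # Down-weight models that were wrong; correct models keep their weight.
    return [w * penalty if p != truth else w
            for p, w in zip(predictions, weights)]

weights = [1.0, 1.0, 1.0]                   # three models, equal trust at first
rounds = [([1, 0, 1], 1),                   # (each model's prediction, true label)
          ([1, 0, 0], 1)]
for preds, truth in rounds:
    weights = update_weights(preds, truth, weights)
print(weights)  # [1.0, 0.25, 0.5] -- model 1 earned the most trust
```

After two rounds, the model that was right both times keeps full weight, while the one that was wrong twice has its vote quartered.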

      In summary, I'm really thankful I was able to attend this research presentation. It was an excellent opportunity to reinforce foundational knowledge, stretch my brain by seeing the practical application of topics I'm learning now, and get excited about all the ML things I've yet to experience.