Journal Reflection #5: Insights into your FINAL project

Image from: https://imageio.forbes.com/specials-images/imageserve/5f151e9faa78e50007ce7c76/The-10-Best-AI-And-Data-Science-Master-s-Courses-For-2021/960×0.jpg

Please use your reply to this blog post to detail the following:

  1. Please give a full description of your final project. Based on your prior work this semester, what made you pick this as your project?
  2. What was your desired learning outcome of your choice of final project?
  3. What has been the most useful aspect of this class? Learning more about Python, GitHub, PyCharm, AI, ML, or …? You decide and please explain why.
  4. Do you feel your work this semester, as summarized by your choice of final project, has helped you better understand some of the foundations of ML and AI?
  5. Do you see yourself pursuing data/analytical sciences coursework once you get to college? Do you anticipate being ahead of some of your classmates thanks to the things you studied this semester?
  6. Include your Github repo URL so your classmates can look at your code.

Take the time to look through the project posts of your classmates. If you saw any project or project descriptions that pique your interest, please reply or respond to their post with feedback. Constructive criticism is allowed, but please keep your comments civil.

THANKS FOR TAKING THE COURSE!!!


2 Responses to Journal Reflection #5: Insights into your FINAL project

  1. Gil Mebane says:

    1) Oddly enough, we did not pick this project based on prior work this semester but, instead, because we hadn’t done any work with face recognition algorithms and were interested in learning how they worked. We also wanted to create a sign-in and sign-out system that robotics could use to log attendance. Based on this, we began by following a tutorial on how to make a simple face-detection algorithm (link: https://www.youtube.com/watch?v=5cg_yggtkso&t=291s) (this may not be the correct link; I was trying to find the video we used and this was the only one in my watch history). This initial tutorial taught us how to identify faces within a JPG image using the cv2 library and a cascade file from Intel (see file main). Please note that at this step in the process, our code was not being fed a live video feed, but instead a single image. What’s more, it was only identifying faces based on traits such as eyes, nose, mouth, the distance between these features, etc. (our code also allowed us to adjust the number of distinctive features we were looking for to make the detection more or less sensitive).
    Next, we followed another tutorial to create a face recognition algorithm (link: https://www.youtube.com/watch?v=pQvkoaevVMk&list=LL&index=9&t=871s). This time around, we fed the algorithm both a reference image and a live camera feed. The result was a program that would tell the user whether the person looking at the camera was the same as the person in the reference image (see file PersonalRecognition). However, we discovered that the deepface library this program was using was not very accurate, so we went in search of a new library. Eventually, we stumbled upon the face_recognition library and attempted to follow a tutorial explaining how it worked and how to implement it (sadly, it was not a very good tutorial, so we ended up scrapping this file; see file attendance).
    Finally, we followed a tutorial to not only use the face_recognition library to determine the identities of multiple individuals from a live camera feed using multiple reference images, but also to log these people’s attendance in an Excel file and record their time of attendance (link: https://youtu.be/LI111LIahDA?si=FyvKdsd-cma3e6bO) (see file final). Rough code sketches of the detection and attendance-logging steps are included after my answers below.
    *Note: a significant amount of debugging went into making each of these algorithms function. I will spare you the details, but most of our issues stemmed from outdated files/libraries and from files that needed to be downloaded directly to the computer’s OS instead of to the project’s root directory.
    2) Our desired learning outcome was to better understand how face-detection and face-recognition software functions and is implemented. In the end, we learned that detection works by mapping an image to look for distinctly face-like features (such as noses and eyes), and recognition is essentially just more mapping: the distances between facial features and the sizes of those features are used to recognize an individual’s unique face.
    3) While I do think learning to implement many AI and ML algorithms will prove incredibly useful given how prevalent such algorithms have become in recent years, the most useful part of this class for me was learning to truly debug code on my own. Although I already sort of understood how to debug code, I was much too reliant on others for help; by learning to recognize error messages, use Stack Overflow, and turn to ChatGPT for simpler problems, I have been able to fix errors in code that I would never have been able to resolve on my own before.
    4) Yes, I feel I have gained a much better understanding of the foundations of ML and AI. Oddly enough, I was talking with my dad and some of his friends at dinner the other day and have now been recruited to build a sentiment analysis tool that analyzes their product’s Amazon reviews in real time. I don’t think this is something I would ever have been able to do before this course, but I now expect it to be a relatively simple project to complete based on what I learned about sentiment analysis during my first project of the year.
    5) Yes, I foresee myself pursuing data/analytical sciences coursework once I get to college. Before this class, I was a bit worried about how I would fare in a college data/analytical sciences course, since it seemed that lots of people on my robotics team were years ahead of me in terms of coding knowledge. By the end of this course, though, those worries had pretty much all melted away because I was able to learn so much about AI and ML in such a short period of time.
    6) https://github.com/GMebane525/FacialRecognition
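
    For anyone curious, here is roughly what the two key steps look like in code. These are minimal sketches, not our exact files; the image names, CSV path, and student names are placeholders. The first sketch mirrors the initial tutorial step: detecting faces in a single JPG with OpenCV’s Haar cascade classifier, where the minNeighbors value plays the role of the sensitivity knob I mentioned above.

    ```python
    # Sketch: detect faces in a single image with OpenCV's Haar cascade.
    import cv2

    # Load the pre-trained cascade file that ships with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    img = cv2.imread("group_photo.jpg")           # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

    # minNeighbors controls how many agreeing detections a region needs before it
    # counts as a face -- raising it makes detection less sensitive.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("group_photo_detected.jpg", img)
    ```

    The second sketch shows the general shape of the final step: recognizing known faces from a webcam feed with the face_recognition library and logging each person’s name and time to a CSV file that can be opened in Excel.

    ```python
    # Sketch: recognize known faces from a webcam and log attendance to a CSV.
    import csv
    from datetime import datetime

    import cv2
    import face_recognition

    # Encode one reference photo per person at startup (assumes one face per photo).
    known_names = ["StudentA", "StudentB"]  # placeholder names
    known_encodings = [
        face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
        for name in known_names
    ]

    logged = set()
    video = cv2.VideoCapture(0)

    while True:
        ok, frame = video.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # face_recognition expects RGB

        # Find every face in the frame and compare it against the reference encodings.
        locations = face_recognition.face_locations(rgb)
        for encoding in face_recognition.face_encodings(rgb, locations):
            matches = face_recognition.compare_faces(known_encodings, encoding)
            for name, matched in zip(known_names, matches):
                if matched and name not in logged:
                    logged.add(name)
                    with open("attendance.csv", "a", newline="") as f:
                        csv.writer(f).writerow([name, datetime.now().strftime("%H:%M:%S")])

        cv2.imshow("Attendance", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    video.release()
    cv2.destroyAllWindows()
    ```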

  2. Anand Jayashankar says:

    1) So, it was a bit of a twisty road to actually get to this project. We knew that we wanted to do something with taking in visual input through a camera, but we were initially thinking of a program that tells a user whether they should trash, recycle, or compost an item. The thing was, as we went along, we figured it would make sense to learn how facial recognition worked so we could apply those principles to our recycling code. Understanding how facial recognition worked, and creating code that actually worked, took a whole lot longer than expected. So, our final project ended up being a facial recognition program that can identify every DA Upper School student in real time by implementing the Deepface library, a lightweight facial recognition framework, and then log the time of each detection and the detected person’s name to an Excel spreadsheet as a method of taking attendance. Gil already wrote a comment with details about the projects we did on the way to building up to real-time recognition and the links we used as guides, so I’ll talk a little more about some parts of our code that I found interesting.
    The first thing had to do with facial recognition as a whole and the biases attached to it. I said earlier that our code identified every single DA Upper School student, but that isn’t entirely true: we had to remove two students from the list because the facial recognition wasn’t identifying their images as people, and both of those students happened to be Black. The reason they couldn’t be identified is that their hair was hanging somewhat in front of their faces, covering up features like the eyes and nose that are key to facial recognition. Still, it was interesting to see that our system seemed to struggle more with identifying Black people, which seems to be a trend across facial recognition technology. Another interesting thing about this project is that we are hoping to implement it in the Robotics lab as a method of keeping attendance. For the last couple of years, attendance has been a huge issue for the Robotics team, so hopefully this will be a concrete solution.
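
    As a rough illustration of the Deepface piece (not our exact code; the file names are placeholders), comparing one webcam capture against a stored reference photo can look something like this:

    ```python
    # Sketch: verify a webcam capture against a reference photo with DeepFace.
    import cv2
    from deepface import DeepFace

    # Grab a single frame from the webcam and save it to disk.
    video = cv2.VideoCapture(0)
    ok, frame = video.read()
    video.release()

    if ok:
        cv2.imwrite("capture.jpg", frame)
        try:
            # verify() reports whether the two images appear to show the same person.
            result = DeepFace.verify(img1_path="capture.jpg", img2_path="reference.jpg")
            print("Match!" if result["verified"] else "Not a match.")
        except ValueError:
            print("No face could be detected in one of the images.")
    ```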

    2) The desired outcome for me was to create something that we could actually apply somewhere on campus. I think I learned a ton by doing this project, and also created an application that should help resolve a long-standing issue for the Robotics team.

    3) I think the most useful part of the course for me was learning to do predictive multivariate regressions with Python. I have already had the opportunity to use this skill multiple times in my Math Modeling class; it is just very easily applicable to real-world scenarios. We’re always trying to figure out how one variable, or multiple variables, affects an outcome, and now I know a succinct and easy way to do that.
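
    For reference, a predictive multivariate regression in Python can be as short as the sketch below. The data is made up purely for illustration, and scikit-learn is just one common way to do it:

    ```python
    # Sketch: a multivariate (multiple-input) linear regression with scikit-learn.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Two made-up predictor variables (e.g., hours studied, hours slept) and one outcome.
    X = np.array([[2, 7], [4, 6], [6, 8], [8, 5], [10, 7]])
    y = np.array([65, 70, 82, 84, 95])

    model = LinearRegression().fit(X, y)
    print("Coefficients:", model.coef_)        # effect of each predictor on the outcome
    print("Intercept:", model.intercept_)
    print("Prediction for [5, 7]:", model.predict([[5, 7]]))
    ```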

    4) I definitely feel like this class has improved my knowledge of AI and ML. Like I said in the last question, the regression and graphing work is going to be really useful for me as I move forward.

    5) Yeah, I’m definitely going to pursue those classes in college. For most of the schools I applied to, I had computer science, data science, and applied math (or some combination of those) as my majors.

    6) https://github.com/anandjss/FacialRecognition.git
