I am taking a similar approach to last year: watching videos on the units I missed, practicing with quizzes, and reading the AP CSA textbook. So far this has been going well, and since it helped me succeed previously, I will continue with it.
Our team agreed on an idea brought up by Anthony Bazhenov: training a machine learning model to recognize sign language. The model would interpret what someone is signing through a camera and allow them to speak through the computer. The goal is for this to be a useful application for people who cannot speak with their voice, and also a chance to do more research into machine learning as a whole.
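To get a more concrete picture of what this might involve, here is a rough sketch of the kind of model we could start with: a small convolutional network that classifies cropped images of hand signs. This is only an illustration, not a decision our team has made; the Python/Keras toolkit, the 64x64 input size, and the 26 output classes (one per ASL letter) are all placeholder assumptions.

```python
# A minimal sketch of a hand-sign image classifier, assuming a dataset of
# 64x64 RGB images labeled with 26 ASL letters. None of these choices are
# final project decisions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes=26, input_shape=(64, 64, 3)):
    # Small CNN: two conv/pool stages, then a dense classification head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",  # labels as integer class ids
        metrics=["accuracy"],
    )
    return model

if __name__ == "__main__":
    model = build_sign_classifier()
    model.summary()  # print the layer-by-layer architecture
```

In a real version, frames from the camera would be cropped to the hand region and fed into a model like this, and the predicted letters or words would then be spoken aloud by the computer.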