This is currently my favorite active project to work on. I started this project because I initially wanted to buy a Google Home to have that whole personal assistant feel, but I realized one thing: it wouldn't really be personal. Google Home is built for many people to use, so its commands and features are kept basic to accommodate all of them. I want to build Goddard to be my own very personal assistant, one that learns about my behavior and can perform tasks for me that Google Home can't already do.
When I first started working on Goddard I envisioned it as a chatbot for creatives that would take the user's input and create a folder full of references for a project. Although I liked the sound of it, a lot of people told me it sounded like something one could do by just Googling, and that way you'd also have the luxury of picking things YOU actually wanted. Which did make a lot of sense.
So, at this point I pivoted and began to think more broadly. Instead of just a chatbot for compiling sources, I'd make one that could understand my work and critique it.
I worked a lot with the Raspberry Pi this past semester and I wanted to incorporate some sort of physical component into Goddard. Ideally, it would unlock by recognizing my face, and then through a 5" LCD touch screen I could send commands or read the latest updates on my projects.
However, I ran into a lot of issues in the camera/face detection portion of the code, which has halted that part of the project for now. I could code around it, but since I want to use this as a learning opportunity, I don't want to pass that up.
Goddard is still under heavy development and I need to take some time to sit down and sort out all the documentation. But in the meantime, here is a sneak mockup of how I would like the voice chat portion of the desktop application to look.