This was our last class of the semester, and in it we each got one-to-one feedback from our lecturer. When it was my turn, I talked my lecturer through my prototype and showed how a user would navigate using the AR lens. He was happy with the look of things, although he suggested that I make some changes to my logo by making the text “Horizon” bigger so that it would be the same length as the curve below it. This feedback was very helpful; as seen in the image below, I followed his advice and I very much like the new look of the logo.
He also said that I should ask myself “Could it be for physically disabled people?”, to which I said that my smart home AR has a wide target audience and isn’t aimed at a specific demographic. The problem I’m aiming to solve here is the compatibility and connectivity issues that I uncovered through my research, thanks to the incorporation of the Matter standard and local connections that will allow my smart home system to connect to third-party devices and run automations without the need for an internet connection.
Besides this, I also mentioned that the AR would be controlled via gesture controls such as eye tracking, glancing, and blinking. When I first presented my idea to my tutor I had mentioned incorporating Neuralink, but upon reflecting on it I realised that it would be quite unethical for many users; not everyone would want a device implanted in their brain, so I went for a simpler, already existing approach.
During my feedback I mentioned that I wanted to build a prototype where, let’s say, if a user looked at a curtain it would highlight “Blinds” on the left side of the screen; this would let the user know that the thing they’re looking at can be controlled. But during development I came to realise that I would be using the same background for all the screens and that this idea was futile, so I abandoned it.
https://www.figma.com/design/BgKY5jTcONPWiDkESHvfjQ/IXD304?node-id=48-16&t=sKW5Xehm1oYB2nVk-1