For my third-year design project, I decided to focus on Alzheimer's disease and design a product for sufferers that emphasised experience design. The project is ongoing: most strategic decisions and details have been fleshed out, but the physical prototypes, engineering, CMF, and service aspects still need to be evaluated and refined.
Alzheimer's is a massive problem, and it would be impossible to solve it all. Finding a focus suitable to the context of this project was therefore key to the success of this endeavour. To identify candidate areas, I explored three ways of breaking down the Alzheimer's problem.
According to several studies, people with Alzheimer's waste a lot of food because it expires after they have forgotten about it.
Not chosen due to somewhat low impact relative to other options, as well as lack of personal interest or excitement about the problem.
Cognitive testing for progression tracking can be a daunting, stressful experience for the patient, as they can see themselves declining or find themselves unable to express their thoughts. This can also be worrying or saddening for family members if they're present.
Some good aspects, and an interesting challenge, but the chance of solving it well with the time and resources available is low.
There is no awareness or campaign to “get checked early for Alzheimer’s”, and no good way to do so anyway. Immediately after diagnosis people are left with no idea what to do next, and social stigma makes it hard to find out or talk to people about it, leaving the sufferer isolated and scared. It is also a slow disease to begin with, so people who know they are at risk worry they might be missing symptoms, but won’t get checked due to time, effort, or even feeling guilty that they are "wasting the doctor's time".
After looking deeper into available and promising technologies, there were some that stood out as plausible within a reasonable timeframe.
The chosen technology was retinal imaging: using a fundus camera to analyse the blood vasculature of the macula at the back of the eye, as changes in these vessels have been linked to signs of Alzheimer's years before symptoms are detected.
Machine learning algorithms on the user's phone analyse the photo and detect any signs of developing Alzheimer's disease. If any red flags show up, the user is prompted through the app to consult a doctor, and given advice on next steps, resources to look into, and online communities to be aware of.
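The report doesn't detail the on-device model, but the flow described above can be sketched as a simple risk-scoring step. Everything below is purely illustrative — the feature names, weights, and threshold are invented for this sketch, not the project's actual algorithm.

```python
import math

# Hypothetical weights for a logistic risk model over vasculature
# features; real values would come from training on labelled scans.
WEIGHTS = {"vessel_density": -2.0, "tortuosity": 1.5, "branch_count": -0.01}
BIAS = 0.8
RISK_THRESHOLD = 0.7  # above this, the app suggests consulting a doctor


def risk_score(features: dict) -> float:
    """Logistic score in [0, 1] from extracted vasculature features."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def should_flag(features: dict) -> bool:
    """True when the app should prompt the user to consult a doctor."""
    return risk_score(features) >= RISK_THRESHOLD
```

The point of the design is that only the prompt and next-steps advice are surfaced to the user; the raw score stays on the device.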
Because a functional prototype needs to be made, the product must be feasible to some extent using today's technology. More details can be found in the full report.
The form of the product needed to be designed with the target user in mind, not only in taste but also in interaction patterns. To achieve this, form boards were constructed early on based on ethnographic research. These set a vision for the goals and were built around four guiding design principles.
An eye test is usually conducted by a trained doctor looking into the patient's eye. To replace this with a self-test, several interactions needed to be ironed out and made seamless so the experience wouldn't cause frustration.
I watched a video of a fundus exam taking place using a somewhat similar retinal scope that used a smartphone as the camera. This exposed some points of interaction difficulty that needed to be solved, the biggest of which was correctly framing the camera.
To solve the framing issue, I sketched several ideas very roughly and evaluated them, then prototyped the best ones with a test user to evaluate them in real-world use. The final implementation is shown at the end of the case study.
The product will only be as effective as the regularity with which it is used, so any help that encourages regular use is beneficial.
The only controls present on the camera are a single button to capture a photo, and a switch to reverse the image on the screen to switch between self-assessment and assisted assessment. There is no need for a power button, as it is always ready to take a photo when not charging.
PROPOSED HARDWARE DESIGN
The final design is a simple, easy to use retinal camera. The eyepiece is integrated with the body, and has a subtle curve to aid in comfort. The user simply looks into the camera, follows the on screen instructions, aligns their eye with the dot, and presses the shutter button.
The interface makes focusing and framing seamless, giving live feedback on the current and target eye and camera positions. The user simply tilts the camera until the two are concentric, leading to consistently accurate photos.
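The concentricity check behind this feedback loop can be illustrated with a few lines of geometry. The pixel tolerance and coordinate scheme here are assumptions for the sketch, not the product's actual values.

```python
import math

# Hypothetical tolerance: how close (in screen pixels) the detected
# pupil centre must be to the target ring to count as "concentric".
ALIGN_TOLERANCE_PX = 6


def alignment_offset(eye_centre, target_centre):
    """Pixel distance between the detected pupil centre and the target."""
    dx = eye_centre[0] - target_centre[0]
    dy = eye_centre[1] - target_centre[1]
    return math.hypot(dx, dy)


def is_aligned(eye_centre, target_centre, tolerance=ALIGN_TOLERANCE_PX):
    """True when the two markers overlap closely enough to capture."""
    return alignment_offset(eye_centre, target_centre) <= tolerance
```

On each camera frame, the interface would redraw both markers and enable the shutter only when `is_aligned` returns true.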
When the user is due for another scan, the device simply extends its camera button, tipping itself in the process. The visually unbalanced, off-kilter product is now just frustrating enough to incentivise the user to "fix" the issue by pushing the camera button down again by performing a scan.
The induction charger has a raised bump that aligns with the depressed shape of the shutter button, allowing for easier alignment, while also adding some stability, even when tilted.
I was honoured to hear that Insight won the RSA Student Design Award Brief AI 100, sponsored by Philips.
This project is a work in progress. The digital design is coming soon, as are further refinements to the hardware design. I'd be happy to chat about them and can provide more detail on request.
There were several points of interest in this project that allowed me to gain new experience or apply knowledge to create something new that I hadn't expected at the start. The following are a few that I've drawn out:
As part of the feasibility assessment, I conducted experiments to test the possibility of detecting changes in the retinal vasculature using existing fundus image datasets. The sample sizes were relatively small, but the model was able to detect whether an eye was healthy and could discern between diseases.
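The model used in those experiments is described in the full report; as a stand-in, the shape of the task can be shown with a toy nearest-centroid classifier over per-image feature vectors. All names and numbers below are invented for illustration.

```python
# Toy nearest-centroid classifier: each class (e.g. "healthy" vs a
# disease label) is summarised by the mean of its feature vectors,
# and new images are assigned to the closest class mean.


def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]


def train(labelled):
    """labelled: {class label: list of feature vectors} -> centroids."""
    return {label: centroid(vs) for label, vs in labelled.items()}


def classify(model, x):
    """Return the label whose centroid is nearest to feature vector x."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], x))
```

With features extracted per fundus image, this kind of baseline is enough to check whether the classes separate at all before investing in a larger model.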
Due to the open nature of the design brief, I constructed a new framework for creative problem solving that is tailored to the way that my mind works, as I am quite an analytical thinker. It also helped me manage the project timing estimates and prioritisation at any given point, so was invaluable to keeping things on track.
It is based on an expanding set of statements that diverge into a tree structure, and I'm still refining it to be more helpful, especially for teams. (I'm currently dogfooding it on my project with Ford – case study coming soon.) If you'd like to hear more about it or this project, feel free to contact me.
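The tree-of-statements idea can be sketched minimally as follows. The class and method names are mine, purely for illustration of the structure, not the framework's actual terminology.

```python
# A statement can diverge into more specific child statements; the
# leaves of the tree are the most concrete problem framings, which is
# where prioritisation and time estimates attach.


class Statement:
    def __init__(self, text):
        self.text = text
        self.children = []

    def diverge(self, *texts):
        """Branch this statement into several more specific ones."""
        new = [Statement(t) for t in texts]
        self.children.extend(new)
        return new

    def leaves(self):
        """All currently most-specific statements under this node."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]
```

For example, a root statement like "Alzheimer's is a massive problem" could diverge into the food-waste, progression-tracking, and early-diagnosis framings considered earlier in this case study.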
While the project went well in general, there were definitely some improvements that could have been made that I have learned from.
One of the biggest opportunities for improvement, I think, is that I should have started prototyping with simple mockups much earlier on. The mantra of Fail Fast and Fail Often has always been how I work, but I seem to have been less comfortable doing so in this project, potentially due to a greater concentration on defining scope, strategy, and emotional experience rather than the minutiae of ergonomic details. More prototypes are currently being made to nail down those aspects of the project.
I improved in several key areas during this project, including:
This was the first time I've worked on a project as expansive as this one, and the responsibility of defining scope and structure while also doing the design work was definitely a great learning experience. To tackle this, I designed a framework (mentioned in the project highlights) that made it easier to manage, and I feel I have also become a bit more accurate when estimating time for particular tasks.
I did not have as much real project experience as I would have liked designing for people in minority groups, such as those with disabilities and the elderly, but I definitely learned a ton on this project. I'm by no means perfect at it, but my confidence in the skill has increased.