
arXiv:2503.11677v3 Announce Type: replace
Abstract: Objective. Patients implanted with the PRIMA photovoltaic subretinal prosthesis in geographic atrophy report form vision with an average acuity matching the 100 μm pixel size. Although this remarkable outcome enables them to read and write, they report difficulty perceiving faces. Despite the pixelated stimulation, patients see smooth patterns rather than dots. We present a novel, non-pixelated algorithm for simulating prosthetic vision, compare its predictions to clinical outcomes, and describe computer vision and machine learning (ML) methods to improve face representation. Approach. Our simulation algorithm (ProViSim) combines a spatial resolution filter, based on the sampling density limited by the pixel pitch, with a contrast filter representing the reduced contrast sensitivity of prosthetic vision. Patterns of Landolt C optotypes and human faces created with this simulator are compared to reports from actual PRIMA users. To recover the facial features lost in prosthetic vision due to limited resolution or contrast, we apply an ML facial landmarking model, as well as contrast-adjusting tone curves, to the image prior to its projection onto the photovoltaic retinal implant. Main results. Prosthetic vision simulated with this algorithm matches the letter acuity observed in clinical studies, as well as the patients' descriptions of perceived facial features. Applying the inverse of the contrast filter to images prior to projection onto the implant, together with accentuating the facial features using an ML facial landmarking model, helps preserve contrast in prosthetic vision, improves emotion recognition, and reduces response time. Significance. Spatial and contrast constraints of prosthetic vision limit the resolvable features and degrade natural images. ML-based methods and contrast adjustments prior to image projection onto the implant mitigate some of these limitations and improve face representation.
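
To make the two filtering stages and the contrast pre-adjustment concrete, the Python sketch below illustrates one possible reading of the pipeline described in the abstract. It is not the authors' ProViSim implementation: the Gaussian form of the resolution filter, the mapping from pixel pitch to blur width, the linear contrast-compression model, and all numerical parameters (pixel_pitch_um, um_per_image_px, sensitivity, gain) are assumptions chosen only to make the example runnable.

import numpy as np
from scipy.ndimage import gaussian_filter


def spatial_resolution_filter(image: np.ndarray,
                              pixel_pitch_um: float = 100.0,
                              um_per_image_px: float = 10.0) -> np.ndarray:
    """Low-pass filter approximating the sampling limit set by the pixel pitch.

    Assumption: the limit is modeled as a Gaussian blur whose sigma is roughly
    half the pixel pitch expressed in image pixels.
    """
    sigma_px = 0.5 * pixel_pitch_um / um_per_image_px
    return gaussian_filter(image.astype(float), sigma=sigma_px)


def contrast_filter(image: np.ndarray, sensitivity: float = 0.4) -> np.ndarray:
    """Compress contrast around the mean to mimic reduced contrast sensitivity.

    A sensitivity below 1 shrinks deviations from the mean luminance; 0.4 is a
    placeholder, not a clinically fitted value.
    """
    mean = image.mean()
    return mean + sensitivity * (image - mean)


def inverse_contrast_tone_curve(image: np.ndarray,
                                gain: float = 1.0 / 0.4) -> np.ndarray:
    """Pre-emphasize contrast before projection so that, after the simulated
    contrast loss, the perceived contrast is closer to the original."""
    mean = image.mean()
    boosted = mean + gain * (image - mean)
    return np.clip(boosted, 0.0, 1.0)


def simulate_prosthetic_vision(image: np.ndarray,
                               pre_adjust: bool = True) -> np.ndarray:
    """Optional contrast pre-adjustment, then the resolution and contrast
    filters representing the simulated prosthetic percept."""
    x = inverse_contrast_tone_curve(image) if pre_adjust else image
    x = spatial_resolution_filter(x)
    return contrast_filter(x)

Comparing simulate_prosthetic_vision(image, pre_adjust=False) with pre_adjust=True on a normalized grayscale face image shows the intended effect of the tone-curve step: the pre-emphasized contrast partly cancels the simulated contrast loss, which is the mechanism the abstract credits for improved emotion recognition. The ML facial-landmarking step is omitted here, since the abstract does not specify the model used.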