I am looking to create a language-learning AR project where users can scan QR codes, images, or objects within a single project space to get an interactive image of the word along with audio of its pronunciation. Creating the sounds and images is straightforward at the moment, but attaching multiple anchors to one project currently appears to be impossible.
This would be a game changer for the work I do with people with diverse learning needs.
Please could this be implemented!