# Meta Lifestyle Accelerator Grant Request

1. Team Overview

Ronald Escalante, Team Lead and Main Developer

Ronald is a versatile technology leader with over 20 years of experience spanning embedded systems, network infrastructure, full-stack development, and virtual reality applications. He excels in leading technical teams that deliver comprehensive IT solutions to companies of various sizes. His expertise ranges from designing and implementing computing systems within budget constraints to developing applications and embedded systems. He specializes in helping small and medium-sized businesses optimize their technological infrastructure for continuous operation, effectiveness, and innovation, as well as in building custom software for their internal use.

Background and Experience:

  • Director of Technical Team at Corporación Skalant, S.A. (2009 – Present):
    • Leads a technical team focused on server administration, customer service,
      and convergent data network management, including telephony and video.
    • Helps clients reduce operational costs and improve efficiency by tailoring
      technological solutions to their needs.
    • Develops custom software solutions for customers, enhancing their processes and extending the base capabilities of leading existing solutions such as Microsoft Dynamics and SAP Business One.
  • Cisco Instructor at Instituto Tecnológico de Costa Rica (2001 – Present):
    • Over 23 years of experience in educating future IT professionals.
    • Teaches the Cisco Certified Network Associate (CCNA) program, enriching the
      learning experience with real-life case studies.
  • Project Engineer at Canam Technology, Inc.:
    • Designed printed circuit boards for the MARK III series of AM and FM radio repeaters with audio override for emergency situations.
    • Developed the firmware for the MARK series equipment and accessories, enabling remote control of the devices and easier automation.
    • This work earned the team at Canam Technology, Inc. a patent for a "System and Method for Inserting Break-in Signals in a Communication System."

Technical Skills:

  • Network Infrastructure and Administration:
    • Expertise in server administration and network management, including data,
      telephony, and video networks.
  • Client-Focused Solutions:
    • Skilled in designing cost-effective IT solutions for small and medium-sized
      businesses.
  • Education and Training:
    • Extensive experience in teaching networking technologies and preparing
      students for CCNA certification.
  • Leadership:
    • Proven ability to direct technical teams and manage client relationships
      effectively.
  • Embedded System Design and Development (since 2001):
    • Proficient in developing schematics, printed circuit boards, and firmware for embedded systems.
    • Programming languages: C, C++, and Assembly Language for firmware development.
  • Full Stack Development:
    • Develops web-based applications interfacing with existing systems such as Dynamics 365 and SAP Business One.
    • Languages and frameworks: C# and JavaScript for web development, along with Blazor and Entity Framework.
  • Virtual Reality Development:
    • Unity: Develops 2D and VR applications for internal use and training.
    • Unreal Engine: Creates VR applications using C++ and Blueprints.
  • 3D Modeling:
    • Entry-level knowledge in computer 3D modeling using Blender.

Ronald's extensive experience spans network infrastructure, server administration, client-focused IT solutions, embedded systems, full-stack development, and virtual reality applications. His diverse skill set, ranging from hardware design to software development, makes him an invaluable asset to his team. Ronald's background in both practical implementation and education, combined with his expertise in emerging technologies like VR and 3D modeling, uniquely positions him to contribute significantly to the development of mixed reality applications. His comprehensive knowledge ensures that projects will not only meet technical requirements but also push the boundaries of accessibility and innovation for target users.

Jose Pablo Esquivel, Product Manager and GTM Strategist

José brings over 24 years of experience in IT product management, technology strategy, and cybersecurity. He is passionate about leveraging innovative solutions to drive sustainable growth and bridging the gap between education and technology.

Background and Experience:

  • Technical Product Manager at Cisco (2020 – 2024):
    • Specialized in network infrastructure, cybersecurity, e-learning, and knowledge management.
    • Led teams on large-scale international projects across Latin America and the Caribbean.
  • Technical Manager – Latin America & Caribbean at Cisco (2007 – 2020):
    • Provided technical support for the Cisco Networking Academy in 25 countries.
    • Supported over 3,500 instructors and 170,000 students annually.
    • Focused on IT training for networking technicians and engineers.
  • Instructor at Instituto Tecnológico de Costa Rica (2000 – 2007):
    • Delivered training in Networking Technician and Professional programs.
    • Taught courses on CCNA, CCNP, Network Security, and Wireless technologies.
    • Trained instructors for Cisco Academies in Central America, Colombia, and
      Venezuela.

Technical Skills:

  • Cybersecurity Expertise: Certified in CCNA Security and CompTIA Security+.
  • Network Infrastructure: Extensive knowledge in network design and management.
  • E-Learning Development: Experienced in creating transformative e-learning and
    distance learning experiences.
  • Multilingual Communication: Fluent in Spanish and English; proficient in
    Portuguese.

José's extensive experience and technical expertise make him a valuable asset to our team.
His background in cybersecurity, network infrastructure, and e-learning, combined with
strategic leadership skills, positions him to contribute significantly to our mixed reality
application aimed at enhancing accessibility and understanding through innovative
technology.

Rafael Campos, Software Developer and Technology Instructor

Rafael is a seasoned technology professional with over 20 years of experience as an engineer, consultant, instructor, developer, and entrepreneur. He specializes in software development, Web3 technologies, zero-knowledge proofs (ZKPs), and mixed reality applications.

Background and Experience:

  • Co-Founder at Vixus Labs (Dec 2023 – Present):
    • Leads an innovative startup in the mixed reality space, aiming to blend real
      and virtual worlds.
    • Focuses on enabling virtual objects and characters to interact physically with the real world.
  • Web3/ZKP Developer at Mina Foundation (Oct 2023 – Apr 2024):
    • Developed a ZKP-enabled mixed reality game called "Hot 'n Cold," enhancing fairness using zero-knowledge proofs.
    • Created zkNotary, a zkOracle for Mina based on TLSNotary, enabling cryptographic proofs of data authenticity.
  • Freelance Web3/ZKP Developer (Sep 2022 – Nov 2023):
    • Specialized in zero-knowledge proofs, blockchain, and cryptography.
    • Participated in hackathons like Scaling Ethereum 2023 and Zero Knowledge Bootcamp.
  • Owner at Altus - LATAM (Apr 2007 – Aug 2022):
    • Co-founded Altus, a Cisco Solution Partner that grew significantly before its acquisition by Cloverhound Inc.
    • Led marketing strategy and developed the Cisco Collaboration practice.
    • Developed Ansible Playbooks and customer-facing chatbots, and customized skills for Cisco's Webex Assistant.
  • Cisco Instructor at Fundación Tecnológica de Costa Rica (2001 – Present):
    • Instructs in the Cisco Networking Academy for the CCNA curriculum.
    • Committed to educating future professionals in networking technologies.

Technical Skills:

  • Software Development:
    • Proficient in programming languages and development frameworks.
    • Advanced studies in Digital Signal Processing (MSc coursework completed, thesis pending).
    • Over one year of professional experience developing mixed reality experiences and applications.
  • Web3 and Zero-Knowledge Proofs:
    • Experienced in blockchain technologies and privacy-enhancing cryptographic protocols.
    • Developed applications leveraging ZKPs for enhanced security and fairness.
  • Mixed Reality Development:
    • Focused on blending real and virtual environments through spatial computing.
    • Aims to revolutionize industries by harnessing mixed reality technologies.
  • Entrepreneurship and Leadership:
    • Successfully co-founded and grew tech startups.
    • Led teams in innovative projects and secured grants and awards.

Rafael's extensive experience in software development, mixed reality, and cutting-edge technologies like Web3 and zero-knowledge proofs makes him a valuable asset to our team.
His entrepreneurial background and passion for innovation position him to contribute significantly to our mixed reality application aimed at enhancing accessibility and understanding through innovative technology.

2. Product Vision

Target Market and Problem Addressed

In our increasingly complex and fast-paced world, a significant portion of the global population faces challenges in perceiving, processing, and interacting with their environment.
These challenges, stemming from various conditions, disabilities, or circumstances, lead to reduced independence, difficulty in social interactions, and limitations in daily activities. The scale of this issue is substantial, as evidenced by recent statistics from the World Health Organization (WHO).

Scale of the Problem:

Visual Impairments:

  • Globally, at least 2.2 billion people have a near or distance vision impairment.

Hearing Impairments:

  • Over 5% of the world's population – or 430 million people – require rehabilitation to address their disabling hearing loss (including 34 million children).
  • By 2050, nearly 2.5 billion people are projected to have some degree of hearing loss, and at least 700 million will require hearing rehabilitation.

Autism Spectrum:

  • About 1 in 100 children globally has autism.
  • The abilities and needs of autistic people vary and can evolve over time, ranging from those who can live independently to those requiring life-long care and support.

Sensory Processing Challenges:

  • Visual impairments: From partial vision loss to complete blindness, affecting the ability to read, navigate, and identify objects.
  • Hearing impairments: Ranging from mild to profound hearing loss, impacting communication and environmental awareness.
  • Auditory processing disorders: Difficulty in understanding or interpreting auditory information, even with normal hearing.
  • Sensory sensitivities: Overwhelming responses to certain sensory inputs, common in conditions like autism or ADHD.

Cognitive Processing Challenges:

  • Learning disabilities: Difficulty in processing written or numerical information.
  • Memory impairments: Challenges in recalling information or following complex
    instructions.
  • Attention deficits: Struggle to focus on relevant information in busy environments.

Language and Communication Barriers:

  • Non-native speakers: Difficulty understanding written or spoken information in a foreign language.
  • Speech impairments: Challenges in verbal communication.
  • Sign language users: Limited accessibility in predominantly spoken language environments.

Situational Limitations:

  • Unfamiliar environments: Tourists or newcomers struggling to navigate or understand local customs.
  • High-stress situations: Difficulty processing information during emergencies or high-pressure scenarios.
  • Multitasking scenarios: Needing hands-free access to information while performing other tasks.

Current Solutions:

Existing assistive technologies often address these challenges in isolation, leading to a fragmented user experience. Many solutions are device-specific, expensive, or require extensive training to use effectively. This leaves a significant portion of the affected population without adequate support.

Proposed Solution and Its Uniqueness

We propose a mixed reality application for the Meta Quest VR headset designed to empower individuals facing various sensory and cognitive challenges, including visual impairments, auditory impairments, and autism spectrum disorders. Our application aims to enhance users' perception of and interaction with their environment by leveraging spatial audio cues, visual enhancements, and intuitive interfaces. By translating sensory information into accessible formats, we enable users to identify physical objects, people, and environmental cues with greater independence and confidence. In this way, users can interact more naturally with those around them, closing the gap that isolates them from the world.

For individuals with visual impairments, the application converts visual information into
spatial audio cues, assisting them in navigating and understanding their surroundings. For
those with auditory impairments, it translates environmental sounds and speech into visual
alerts, text captions, or pictograms displayed within the headset, facilitating communication and enhancing situational awareness. For individuals on the autism spectrum, the application offers customizable sensory inputs and translates complex auditory or textual information into simplified visual formats, such as pictograms, aiding in comprehension and reducing sensory overload.

Leveraging Meta Quest Technology

The Meta Quest headset provides the ideal platform for our solution, equipped with advanced features such as:

  • Depth Sensors: Capable of mapping the immediate environment, including floors,
    walls, and objects.
  • High-Resolution Stereoscopic RGB Cameras: Capture detailed visual information of
    the surroundings.
  • Stereo Sound Output: Delivers immersive 3D spatial audio, which can be enhanced
    with headphones if needed.

By integrating these technologies, we can create an application that translates visual and
auditory information into intuitive sensory feedback tailored to each user's needs.

Key Features

  1. Object Detection

    Our application will detect the position and size of physical obstacles like walls, furniture, and other objects. To inform users of these obstacles:

    • Visual Impairments: The system generates characteristic spatial sounds emanating
      from the location of each object. Audio cues vary based on object type, size, shape, and proximity. For example, a wall directly in front might produce a low-pitched sound from the front, while a small chair to the left emits a high-pitched sound from that direction (a cue-mapping sketch follows this feature list).
    • Auditory Impairments and Autism: Visual cues, such as highlighted outlines or icons, appear over detected objects to enhance visibility and comprehension.
  2. People Detection

    Utilizing the camera feed, the application detects the presence of people in the environment:

    • Visual Impairments: Represented by unique sounds associated with human presence. Personalized tunes or melodies can indicate specific individuals, with sounds originating from their spatial locations.
    • Auditory Impairments and Autism: Visual indicators, like name tags or pictograms, appear above detected individuals to assist in identification and social interactions.
  3. Body Language and Gesture Recognition

    The application interprets body language and gestures of nearby individuals:

    • Visual Impairments: Provides auditory cues for gestures like handshakes, pointing, or waving, indicating both the action and the exact position.
    • Autism Spectrum: Simplifies social cues into clear visual representations to aid in understanding and responding appropriately.
  4. Contrast Enhancement for Low Vision Users

    For users with reduced vision, we enhance the visual feed to improve object distinction against backgrounds:

    • Combines depth sensor data with RGB camera input to highlight obstacles and relevant objects.
    • Adjusts contrast, brightness, and color settings to suit individual visual capabilities.
  5. Audio and Speech-to-Text Conversion

    To assist individuals with hearing impairments:

    • Environmental Sounds: Converts sounds like sirens, alarms, or doorbells into visual alerts or symbols within the headset.
    • Speech: Translates spoken language from nearby individuals into text captions or pictograms, facilitating conversations with people who do not know sign language.
  6. Text and Speech-to-Pictogram Conversion

    Understanding that some individuals on the autism spectrum may find written text
    challenging:

    • Simplified Communication: Translates complex auditory or textual information into sequences of pictograms or simple images.
    • Environmental Sounds: Represents sounds (like a ringing phone or barking dog) with corresponding icons.
    • Instructions and Schedules: Converts tasks or schedules into visual step-by-step guides using pictograms.
  7. Gesture-to-Speech Translation (Future Feature)

    An additional future capability includes recognizing predefined hand gestures and translating them into audible speech:

    • For Users with Speech Impairments or Hearing Loss: Allows users to communicate with others who do not understand sign language.
    • Integration with Standard Sign Language Systems: Potentially utilizes systems like ASL for broader accessibility.
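
To make Feature 1's cue design concrete, below is a minimal Unity C# sketch of the mapping from a detected obstacle's size to a spatialized audio cue. The ObstacleCue component, its pitch range, and the assumed Present callback are illustrative placeholders, not a final API.

```csharp
using UnityEngine;

// Illustrative sketch: presents a detected obstacle as a spatialized
// audio cue, with larger objects mapped to lower pitch as described in
// Feature 1. The pitch range and size cap are assumed values.
public class ObstacleCue : MonoBehaviour
{
    public AudioSource source;     // fully 3D (spatialBlend = 1), clip set in Inspector
    public float minPitch = 0.5f;  // large, wall-like obstacles
    public float maxPitch = 2.0f;  // small obstacles such as chairs
    public float maxSize = 3.0f;   // sizes (meters) above this clamp to minPitch

    // Assumed entry point, called by the detection pipeline each time an
    // obstacle is reported with its world position and approximate size.
    public void Present(Vector3 worldPosition, float objectSize)
    {
        // Moving the emitter to the obstacle lets the spatializer
        // render direction and distance for the user.
        transform.position = worldPosition;

        // Larger objects map to lower pitch.
        float t = Mathf.Clamp01(objectSize / maxSize);
        source.pitch = Mathf.Lerp(maxPitch, minPitch, t);

        if (!source.isPlaying) source.Play();
    }
}
```

In a full implementation, each object class would additionally select its own characteristic clip, with proximity conveyed by the spatializer's distance attenuation.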

Potential to Become a Leading Quest Lifestyle App

Our application seeks to create a more inclusive and accessible world by bridging the gap
between individual perceptual or cognitive abilities and the demands of the environment
using the power of mixed reality technology. By providing personalized, intuitive auditory and visual cues tailored to individual needs, we aim to enhance independence and quality of life for users with various sensory and cognitive challenges. Through user-centered design and continuous feedback, we are committed to developing a solution that meets the real needs of our target audience, aligning with Meta Quest's mission to enrich people's lives through innovative technology. We believe this application has the potential to become a leading Quest lifestyle app.

Business Model

The application will operate on a simple, affordable subscription model designed to make our accessibility features available to all end users:

Single-Tier Subscription

  • Affordable Monthly Fee: Priced to ensure accessibility for individual users
  • Full Feature Access: All accessibility features included
  • Regular Updates: Continuous improvements and new features
  • User Support: Access to customer support

Key Benefits

  • Simplicity: One straightforward plan for all users
  • Accessibility: Low cost ensures the app is available to those who need it most
  • Sustainability: Steady revenue stream to support ongoing development and
    maintenance
  • User-Centric: Focused on individual end-user needs
  • Flexibility: Monthly subscription allows users to opt in or out as needed

This model aligns with our mission to enhance accessibility by providing a powerful, continuously improving tool at an affordable price point. It ensures that all users have access to the full capabilities of the application, supporting greater independence and improved quality of life.

AI Integration Plans

Our mixed reality application for Meta Quest will leverage existing AI models and APIs, such as Meta's Llama, to enhance its ability to assist users with various sensory and cognitive challenges. By utilizing established services, we can focus on integration and user experience while benefiting from state-of-the-art AI capabilities.

By integrating these established AI services and APIs, our application will provide a
sophisticated, adaptive, and personalized assistive experience for users with a wide range of sensory and cognitive challenges. This approach allows us to leverage cutting-edge AI
capabilities while focusing our development efforts on creating a seamless and effective user experience.

Key Benefits of This Approach:

  1. Rapid development and deployment, leveraging well-tested and optimized AI services.
  2. Regular updates and improvements to AI capabilities without requiring extensive in-house AI expertise.
  3. Ability to switch or add new AI services as better options become available, ensuring
    the app stays at the forefront of AI technology.
  4. Reduced computational load on the Meta Quest device by offloading intensive AI
    tasks to cloud services where appropriate.

This AI integration strategy will enable our application to effectively convert various inputs
(visual, auditory, or gestural) into the specific cues each user needs, whether that's spatial audio for visually impaired users, visual alerts for those with hearing impairments, or
simplified pictograms for users on the autism spectrum.
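
As a simple illustration of this input-to-cue conversion, the sketch below routes a recognized event to the output modality each accessibility profile needs. The profile and cue names are hypothetical placeholders, not a committed design.

```csharp
using System;

// Sketch of per-profile cue routing: the same detection can surface as
// spatial audio, a visual alert, or a pictogram depending on the user.
public enum UserProfile { VisualImpairment, HearingImpairment, AutismSpectrum }
public enum CueKind { SpatialAudio, VisualAlert, Pictogram }

public static class CueRouter
{
    public static CueKind Route(UserProfile profile) => profile switch
    {
        UserProfile.VisualImpairment  => CueKind.SpatialAudio,
        UserProfile.HearingImpairment => CueKind.VisualAlert,
        UserProfile.AutismSpectrum    => CueKind.Pictogram,
        _ => throw new ArgumentOutOfRangeException(nameof(profile)),
    };
}
```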

3. Development Milestones

Milestone 1: Mechanical Proof of Concept (MPOC) - 3 Months

We have chosen spatial audio implementation and real-time object detection for the MPOC, in order to lay the essential groundwork for the more complex integration of these technologies in the Vertical Slice, where we'll create a comprehensive spatial audio feedback system for obstacle detection.

Spatial Sound Cue System

  1. Perform in-depth research on the Meta XR Audio SDK for Unity as well as the theory behind 3D audio spatialization (HRTFs, distance modeling, etc.).
  2. Build a simple VR experience in Unity that showcases the spatial audio capabilities of the Meta Quest using virtual sound-emitting objects.
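
A minimal sketch of such a virtual sound-emitting object is shown below, assuming Unity's built-in AudioSource with the Meta XR Audio spatializer plugin selected in the project's audio settings; all parameter values are illustrative.

```csharp
using UnityEngine;

// MPOC sketch: a virtual object that emits a looping, fully spatialized
// tone so we can evaluate the Quest's 3D audio localization.
[RequireComponent(typeof(AudioSource))]
public class SpatialEmitter : MonoBehaviour
{
    void Start()
    {
        var source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                        // 100% 3D audio
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.minDistance = 0.5f;                         // meters
        source.maxDistance = 10.0f;
        source.loop = true;
        source.Play();  // AudioClip assigned in the Inspector
    }
}
```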

Real-Time Object Detection

  1. Perform research on and integrate Meta Quest's depth sensor API for real-time 3D environment scanning.
  2. Perform research on and integrate Meta Quest's camera feed for real-time object detection using an AI Vision model such as Llama 3.2.
  3. Implement sensor fusion from #1 and #2 to design a reliable object detection and classification algorithm, possibly using machine learning or more traditional methods, such as the Kalman filter.
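
As a sketch of the "more traditional methods" in step 3, the snippet below fuses two noisy position estimates of the same object (one from the depth sensor, one from camera-based detection) with a constant-position Kalman filter. The noise constants are illustrative assumptions that would need tuning on-device.

```csharp
using UnityEngine;

// Constant-position Kalman filter over an object's world position.
// Each sensor reading is applied as a correction weighted by its noise.
public class PositionFusion
{
    Vector3 estimate;
    float variance = 1f;               // shared scalar variance for all axes
    const float ProcessNoise = 0.01f;  // drift allowed between updates

    public PositionFusion(Vector3 initial) { estimate = initial; }

    public Vector3 Update(Vector3 measurement, float measurementNoise)
    {
        variance += ProcessNoise;                       // predict
        float gain = variance / (variance + measurementNoise);
        estimate += gain * (measurement - estimate);    // correct
        variance *= 1f - gain;
        return estimate;
    }
}

// Usage (assumed noise values): call once per sensor per frame, e.g.
//   fused = filter.Update(depthPosition, 0.02f);
//   fused = filter.Update(cameraPosition, 0.10f);
```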

Milestone 2: Vertical Slice - 6 Months

For the vertical slice, we have chosen to focus on one of Percepta's core features: spatial audio feedback for obstacle detection to enable assisted navigation for visually impaired users. This feature and use case exemplify our app's potential to transform how users with visual impairments perceive their environment. We will implement real-time obstacle detection using both the Meta Quest's depth API and RGB cameras. The system will then translate these detected obstacles into intuitive spatial audio cues, allowing users to navigate their surroundings more effectively.

Obstacle-to-Sound Translation

  1. Develop a system for prioritizing and filtering detected objects based on relevance and proximity.
  2. Create a low-latency pipeline for translating detected objects into spatial audio cues.
  3. Create a library of distinct sound cues for different types of objects and environmental features.
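
A minimal sketch of the prioritization step (item 1) follows; the DetectedObstacle type, relevance weights, and the cap on concurrent cues are assumptions for illustration only.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Keeps only the few most relevant obstacles each frame so that
// concurrent audio cues remain intelligible.
public class DetectedObstacle
{
    public Vector3 Position;
    public float RelevanceWeight;  // e.g., moving objects score higher
}

public static class CuePrioritizer
{
    const int MaxConcurrentCues = 3;  // assumed cap, to be tuned with users

    public static List<DetectedObstacle> Select(
        IEnumerable<DetectedObstacle> detections, Vector3 userPosition)
    {
        // Score = relevance discounted by distance; sonify the top few.
        return detections
            .OrderByDescending(d =>
                d.RelevanceWeight / (1f + Vector3.Distance(userPosition, d.Position)))
            .Take(MaxConcurrentCues)
            .ToList();
    }
}
```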

Testing and Optimization

  1. Conduct initial user testing with visually impaired individuals.
  2. Optimize performance for consistent frame rates and low latency.
  3. Refine spatial audio cue system based on user feedback.
  4. Improve object detection accuracy and speed.

Documentation and Preparation

  1. Create a full product design document detailing all features and future development plans.
  2. Develop user manual and quick start guide.
  3. Prepare demonstration materials for potential investors and partners.
  4. Create a roadmap for future updates and feature additions.
  5. Compile a report on user testing results and planned improvements.

4. Development Progress and Funding

Current Development Status

Our project is currently in its conceptual stage. We have:

  • Completed initial problem analysis
  • Developed a detailed product vision and feature set
  • Assembled a skilled team with relevant expertise
  • Created a comprehensive development roadmap for a full year of work

We are poised to begin active development immediately upon securing funding.

Fundraising Progress to Date

  • Total funds raised to date: $0
  • Current funding sources: None

Total Anticipated Development Cost

Total anticipated development cost: $208,000

This budget covers:

  • Development team salaries for 12 months
  • Hardware and software resources
  • User testing and iteration

Funding Requested from Meta

Total funding requested from Meta: $208,000 (Two hundred eight thousand dollars)