Mechanical Arm BCI
Scientific Initiation Project
Introduction
What does it mean to move, to reach, to act—when those abilities are no longer accessible? For millions of individuals with motor impairments, the loss of physical autonomy extends far beyond biology; it reshapes identity, independence, and one’s ability to participate in the world. In a society increasingly structured around interaction—interfaces, gestures, automation—the absence of that ability becomes more than personal: it becomes systemic. This project began with a challenge both human and technical: could we design a prosthesis that was not only functional but radically accessible? Could we create a system built not for the market, but for the person—engineered through empathy, and shaped by necessity rather than novelty?
The result is a working proof of concept: an anthropomorphic mechanical arm controlled by surface electromyography (EMG), developed using open-source tools, low-cost fabrication methods, and embedded systems that transform muscle intention into real-time physical response. More than a technical integration, it is an answer to an ethical question: who gets to interact, and on what terms? Drawing from Brain-Computer Interface (BCI) principles, this project translates biological expression—electrical signals from the forearm—into functional prosthetic motion. It reframes the prosthesis not as a high-tech replica of a limb, but as a bridge between lost function and regained agency. The signal—once silent—is here made audible again, not through voice, but through movement. The prototype demonstrated reliable real-time gesture translation under low-noise EMG signal conditions, offering a functional, low-barrier alternative to commercial systems.
At its core, the innovation lies in what I call inclusive minimalism—an engineering approach that strips away complexity, exclusivity, and cost without reducing capability. By grounding the control interface in surface-acquired EMG signals—low-noise, inexpensive, and biologically universal—this project proposes a scalable prosthetic logic that avoids reliance on invasive sensors or prohibitively costly systems. Its architecture was intentionally built to be modular, replicable, and modifiable—using 3D-printed parts, Arduino-based logic, and a streamlined control framework that can be adopted by students, clinicians, and researchers alike. Beyond engineering, it is a philosophy: inclusion is not an optimization, but a starting point. In regions like Brazil, where many face barriers to advanced assistive technology due to cost or limited public access, this model demonstrates how open systems can decentralize innovation and empower local adaptation.
Too often, assistive technologies are created without the involvement of the communities they intend to serve—locked within institutional ecosystems, priced beyond reach, or designed through a lens of able-bodied assumptions. According to the World Health Organization, more than 2.5 billion people currently need at least one form of assistive technology, yet the vast majority—especially in low- and middle-income countries—lack access due to cost, stigma, and systemic neglect. These are not individual problems; they are structural failures. This work stands in opposition to that pattern. It aligns with global movements in open prosthetics, design justice, and decentralized hardware—such as Limbitless Solutions, Open Bionics, and the WHO Global Report on Assistive Technology—where innovation is reclaimed by those most in need of it. Imagine a teenager in São Paulo, unable to access commercial prosthetics, now assembling and customizing this arm in a public-school lab. As development continues, the system will incorporate real user feedback, ensuring co-design becomes part of the platform itself—guiding the future of functionality through lived experience.
As a research platform, this project also contributes to the future of neuroadaptive systems and inclusive robotics. Its design allows for the integration of machine learning models to refine gesture classification, user-specific calibration, and adaptive feedback systems. The project bridges mechanical design, bioelectric signal processing, and embedded software—providing a testbed for future explorations in smart prosthetics, sensor fusion, and human-centered AI. It reflects not only a commitment to functionality, but a scalable model of interdisciplinary, ethics-driven innovation in the service of agency. This is a prototype—but also a provocation, a research model ready to be tested, improved, and shared through academic collaboration.
This journey has transformed my understanding of engineering—not as a set of technical challenges to overcome, but as a medium through which empathy can become structure, and intention can become code. I see this project as a prototype not only of technology, but of a different kind of system: one where neuroscience, ethics, and systems design converge to serve human dignity. And I imagine a future where someone—perhaps a child in a rural clinic or an adult rebuilding motion—raises their hand not with struggle, but with ease. Not as a miracle of machinery, but as a quiet reminder that autonomy, when engineered with intention, becomes beautifully ordinary. Because in that moment, the signal is not just detected—it is recognized, amplified, and allowed to move the world.

Construction
To transform the conceptual foundations of this project into a working prototype, the construction phase served as a space where multidisciplinary insight became tangible form. Bridging neuroscience, prosthetic theory, and mechanical design, the goal was to create a system capable of interpreting biological signals and translating them into controlled physical motion. This required not only a platform for signal acquisition, but a responsive, modular, human-centered structure capable of reflecting movement with intention. In this context, construction became a form of research—a way to manifest agency through engineered embodiment.
The physical components of the prosthesis were created using additive manufacturing (3D printing), enabling rapid iteration, low-cost production, and high structural adaptability. Each part of the arm was modeled to mimic essential human articulation while accommodating the embedded electronic systems. Servo motors were mounted into pre-designed cavities; wiring and signal paths were integrated into the form factor; joints were structured for balance between weight, resistance, and range of motion. The decision to use open-source design tools and consumer-accessible printers was strategic: it allowed for a prosthetic prototype that could be built without industrial infrastructure, reducing production cost and enabling global replication. In this way, the construction method was as inclusive as the system itself.
3D printing did not simply enable the parts—it redefined the scope of who could participate in building them. By adopting fabrication processes that are accessible to schools, research labs, and maker spaces, the project dismantled a central barrier in prosthetic design: exclusivity. Students could build, iterate, and test physical components without needing advanced equipment or funding. This construction logic aligns with the values of open hardware and design justice, offering a system that is as instructive as it is functional. It shifts the question from “Who can afford assistive technology?” to “Who can build it, share it, and improve it?”
In this phase, the act of construction was more than mechanical—it was a gesture of reclamation. To shape a prosthetic arm from open-source software, low-cost materials, and bioelectrical purpose is to propose that autonomy can be democratized. The result is not only a tool, but a blueprint: a system that teaches as it operates, replicates as it evolves, and redefines construction not as manufacturing alone, but as the architecture of inclusion.

Development
Upon completing the physical construction of the prosthetic arm, the project transitioned to its central challenge: engineering a system capable of interpreting human neuromuscular intention through real-time bioelectrical signals. This required building a sensitive signal interface grounded in electromyography (EMG), a technique that provides non-invasive access to the electrical activity of skeletal muscle as driven by the somatic nervous system, here the forearm musculature. This region, chosen for its role in expressive hand gestures and its anatomical accessibility, became the neural entry point for controlling a mechanical body.
Using the Myo armband, the system captured electrical potentials from muscle fiber depolarization events, producing analog waveforms representative of motor unit activation. These signals, observed at the skin surface as motor unit action potentials (MUAPs), are shaped by a variety of physiological and anatomical factors: electrode placement, tissue impedance, skin thickness, and the fiber composition (slow versus fast twitch) of the targeted muscle group. To maximize signal fidelity, the electrode contacts were positioned following SENIAM recommendations: between the innervation zone and the distal tendon, aligned parallel to the muscle fiber orientation.
The signal acquisition pipeline followed a multi-stage architecture. First, a band-stop (notch) filter attenuated 60 Hz power-line noise; a band-pass filter then isolated the 20–250 Hz EMG spectrum, which encompasses both slow- and fast-twitch activation. Once digitized through analog-to-digital (A/D) conversion, the waveform was processed using RMS (Root Mean Square) and integrated EMG (iEMG) methods to quantify signal amplitude and energy over time windows. This allowed the system to distinguish gestures based on contraction intensity, timing, and waveform morphology. Movement classification was then achieved through gesture mapping, where specific muscle signatures triggered deterministic commands to the Arduino microcontroller. The controller, in turn, drove the servo motors embedded in the prosthesis, actuating its articulated joints with precise motion logic.
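The windowed feature extraction and threshold-based gesture mapping described above can be sketched in a few lines. This is a minimal illustration, not the project's actual firmware: the sampling rate, window length, threshold values, and gesture labels below are assumptions chosen for clarity, and a real deployment would calibrate them per user and per channel.

```python
import math

# Illustrative parameters (assumptions, not the Myo armband's actual spec):
FS = 200      # sampling rate in Hz
WINDOW = 30   # samples per analysis window (150 ms at 200 Hz)

def rms(window):
    """Root Mean Square: amplitude of the EMG signal over one window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def iemg(window, dt=1.0 / FS):
    """Integrated EMG: accumulated rectified signal over one window."""
    return sum(abs(x) for x in window) * dt

def classify(window, rest_thresh=0.05, strong_thresh=0.4):
    """Deterministic gesture mapping from contraction intensity.

    Thresholds are placeholder values for illustration only; in practice
    they would be tuned during a per-user calibration session."""
    amplitude = rms(window)
    if amplitude < rest_thresh:
        return "REST"
    elif amplitude < strong_thresh:
        return "LIGHT_GRIP"
    return "STRONG_GRIP"

# Example: a near-quiet window versus a sustained strong contraction.
quiet = [0.01 * ((-1) ** i) for i in range(WINDOW)]
active = [0.6 * math.sin(2 * math.pi * 50 * i / FS) for i in range(WINDOW)]
print(classify(quiet))   # low RMS, interpreted as rest
print(classify(active))  # high RMS, interpreted as a strong grip
```

The same windowing scheme supplies both features: RMS captures instantaneous contraction intensity, while iEMG accumulates effort over time, which is useful for distinguishing brief twitches from held contractions.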
This electrical-to-mechanical loop formed a closed biomechatronic system—a network where volitional muscle contraction, signal decoding, and robotic execution coexisted. By integrating motor unit physiology with real-time embedded computation, the system did not merely emulate motion—it simulated muscular intent, recreating a logic of effort, relaxation, and resistance. The embedded tendon structures, fabricated from high-strength braided wire and calibrated using spring-loaded tensioning mechanisms, allowed the robotic arm to mirror the dynamics of organic musculature, including degrees of freedom for individual finger control.
While initial gesture recognition was limited to predefined hand movements, the system architecture was designed for extensibility. Further research will focus on adaptive gesture learning, integrating machine learning models to account for signal variation caused by muscle fatigue, electrode displacement, or anatomical differences across users. Likewise, the inclusion of haptic or visual feedback loops—via LEDs, vibration motors, or proprioceptive resistance—may convert this interface into a fully bidirectional channel. In this evolution, the prosthesis ceases to be an endpoint; it becomes an adaptive extension of the user’s nervous system.
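As a concrete sense of what per-user adaptive calibration could look like, the sketch below implements a deliberately simple nearest-centroid classifier over per-channel RMS features. Everything here is hypothetical: the class name, feature choice, and toy calibration data are illustrative stand-ins for the machine learning models the text envisions, not part of the built system.

```python
import math

def feature_vector(channels):
    """Per-channel RMS features from one multi-channel EMG window."""
    return [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in channels]

class CentroidGestureModel:
    """Nearest-centroid gesture classifier with per-user calibration.

    Recording a few labeled windows per gesture lets the decision
    boundaries follow each user's own signal characteristics, which is
    the core idea behind the adaptive calibration discussed above."""

    def __init__(self):
        self.centroids = {}  # gesture label -> mean feature vector

    def calibrate(self, label, windows):
        feats = [feature_vector(w) for w in windows]
        n = len(feats)
        self.centroids[label] = [sum(col) / n for col in zip(*feats)]

    def predict(self, channels):
        f = feature_vector(channels)
        return min(
            self.centroids,
            key=lambda lbl: sum(
                (a - b) ** 2 for a, b in zip(f, self.centroids[lbl])
            ),
        )

# Toy calibration: two 2-channel "gestures" with opposite activation patterns.
model = CentroidGestureModel()
model.calibrate("open", [[[0.10] * 8, [0.50] * 8], [[0.12] * 8, [0.48] * 8]])
model.calibrate("close", [[[0.50] * 8, [0.10] * 8], [[0.52] * 8, [0.09] * 8]])
print(model.predict([[0.11] * 8, [0.49] * 8]))  # -> open
```

Because re-calibration is just a matter of recording new windows, the same mechanism could absorb drift from muscle fatigue or electrode displacement by periodically refreshing the centroids.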
This development phase not only demonstrated the technical feasibility of decoding muscular intention, but also reframed assistive devices as fluid, expressive instruments of autonomy. Through every contraction translated into movement, the system restored something deeper than control—it reestablished a language of interaction between the human body and the engineered world.

Project Outcomes
The culmination of this research was the development of a functional prototype: an anthropomorphic mechanical arm capable of transforming neuromuscular intention into controlled motion. Designed to simulate voluntary human movement through surface-level electromyographic (EMG) signals, the system stands as a technical and philosophical manifestation of an inclusive design vision—one that fuses brain-computer interface (BCI) theory with low-cost fabrication and open-hardware ethics. This prototype was not merely constructed; it was conceived as a research instrument, a bridge between neurophysiological signal processing and the embodied execution of intent.
At its core, the system integrates the Myo armband as a non-invasive EMG acquisition interface. Forearm muscle activity is detected, pre-processed, and translated by an Arduino-based control unit into discrete commands for servo actuation. Mechanically, the design employs a tendon-driven framework using braided cables and spring-return mechanisms to approximate the dynamics of human musculature—enabling fundamental movements such as flexion, grip, and rotation. All structural components were fabricated using 3D printing and adapted from open-source prosthetic models, emphasizing replicability, modularity, and cost-efficiency. The result is not a finished assistive device, but a platform for experimentation, modular by design and accessible by principle.
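The discrete command stage, where a classified gesture is translated into per-joint servo targets, can be illustrated as a small lookup table with range clamping. The gesture names, joint count, and angle values below are hypothetical placeholders, not the project's actual firmware interface; the sketch only shows the shape of the deterministic mapping the Arduino performs.

```python
# Hypothetical command table: each recognized gesture maps to target
# angles (degrees) for five finger servos. Values are illustrative.
GESTURE_POSES = {
    "REST":  [0, 0, 0, 0, 0],
    "GRIP":  [80, 80, 80, 80, 60],
    "POINT": [0, 80, 80, 80, 60],
}

def clamp(angle, lo=0, hi=90):
    """Keep every command inside the servos' safe mechanical range."""
    return max(lo, min(hi, angle))

def to_servo_commands(gesture):
    """Translate a classified gesture into per-joint servo angles.

    Unrecognized gestures fall back to the rest pose, so a
    misclassification relaxes the hand rather than moving it."""
    pose = GESTURE_POSES.get(gesture, GESTURE_POSES["REST"])
    return [clamp(a) for a in pose]

print(to_servo_commands("GRIP"))     # -> [80, 80, 80, 80, 60]
print(to_servo_commands("UNKNOWN"))  # falls back to the rest pose
```

Defaulting to the rest pose on unknown input is a deliberate safety choice for a tendon-driven hand: an uncertain classification should release tension, not apply it.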
Testing focused on evaluating the fidelity of signal-to-motion translation under controlled conditions. While the system successfully triggered certain servo responses through defined gestures, limitations emerged in gesture discrimination, signal stability, and real-time responsiveness. Electromechanical inconsistencies were compounded by physical degradation in the tendon system, particularly due to friction-induced wear. These outcomes did not diminish the project’s value; rather, they revealed the boundaries of current low-cost BCI integration—boundaries that now inform future design considerations.
The project followed a structured, inquiry-driven development arc: from neurophysiological signal analysis to digital interpretation and embodied mechanical expression. Although the current implementation lacks adaptive control or continuous feedback, it lays the groundwork for more sophisticated interaction models. As a proof of concept, the arm functioned as a research probe, surfacing key design tensions between biological variability, hardware constraints, and signal resolution. More importantly, it catalyzed a shift in approach—from building a device to exploring what it means to engineer for agency.
This work does not conclude with resolution, but with orientation: toward systems that learn, interfaces that adapt, and technologies that restore. Its value lies not in perfection, but in provocation. In doing so, the prototype becomes more than a machine—it becomes a question, embodied: How might we build systems that don’t just respond to the body, but listen to it?

Proof of Concept
Future Directions
Designing Toward What Comes Next
This project was never designed to deliver a conclusion. It was built to open a frontier. Its value lies not in the completeness of its execution, but in the clarity of the questions it leaves behind—questions about control, perception, autonomy, and the ethics of assistive design. In working at the intersection of neurotechnology and affordability, this prototype embraced, rather than circumvented, complexity. Its mechanical limitations, signal instabilities, and intermittent responsiveness were not shortcomings—they were revelations. Each constraint exposed the hidden boundaries of current systems and signaled where innovation must next emerge. In this sense, the arm became not only a functional artifact, but a critical lens—through which the relationship between human intention and embodied technology could be studied, disrupted, and reimagined.
Future development will focus on expanding the expressive range of the system: increasing its gestural vocabulary, refining its classification models, and reengineering its tendon architecture to reduce friction and extend mechanical durability. Crucially, the interface must evolve from unidirectional execution to mutual adaptation. This will require the integration of multimodal feedback—haptic, visual, proprioceptive—capable of restoring sensory reciprocity within the loop of interaction. Modularity and personalization will remain essential, enabling the platform to adapt not only to anatomical diversity, but to the emotional and cognitive particularities of each user. In short, the next generation of this system must become not only smarter, but more humane.
Yet beyond technical refinement, this research aspires to something deeper: participation in a larger, more urgent conversation. What does it mean to create technologies that amplify agency rather than automate dependence? How might the ethics of inclusion be inscribed into hardware and code? This project stands in alignment with an emerging movement—toward open-source, neuroadaptive tools rooted in equity, and toward a model of innovation where access is not an afterthought, but a foundation. It imagines a future in which intelligence is measured not only in algorithms, but in empathy. Not only in precision, but in presence.
What began as an experimental build has evolved into a philosophical stance: that research, at its best, is a method of liberation. The most meaningful technologies are not those that mimic the body with perfect fidelity, but those that respond to its asymmetries—its imperfections, its interruptions, its will. In this light, the next stage of this work is not simply to optimize machines, but to interrogate the systems that shape them—and to design for a world where function and dignity are no longer in tension, but in dialogue.
This project also marks the beginning of a long-term intellectual commitment. My next steps are oriented toward doctoral research, where I intend to explore the convergence of neural interfaces, adaptive systems, and artificial intelligence as a new architecture for human-machine interaction. I am especially drawn to the frontier between biosignal processing and intelligent control: how learning models can transform static input into dynamic, personalized assistance; how a prosthetic can become a partner. Future investigations will center on multimodal signal fusion, ethical AI design, and prototyping methods that expand both access and expression. Through interdisciplinary research grounded in engineering, design, and justice, I hope to contribute to systems that do more than extend capability—they affirm identity, agency, and the right to be heard in motion.