Undulatory swimming appears complex—though not to an eel robot
Lots of robots use bioinspiration in their design. Humanoids, quadrupeds, snake robots—if an animal has figured out a clever way of doing something, odds are there’s a robot that’s tried to duplicate it. But animals are often just a little too clever for the robots we build to mimic them, which is why researchers at the Swiss Federal Institute of Technology Lausanne (EPFL) are using robots to learn how animals themselves do what they do. In a paper published today in Science Robotics, roboticists from EPFL’s Biorobotics Laboratory introduce a robotic eel that leverages sensory feedback from the water it swims through to coordinate its motion without the need for central control, suggesting a path toward simpler, more robust mobile robots.
The robotic eel—called AgnathaX—is a descendant of AmphiBot, which has been swimming around at EPFL for something like two decades. AmphiBot’s elegant motion in the water comes from the equivalent of what are called central pattern generators (CPGs): networks of neural circuits (the biological kind) that generate the rhythmic oscillations that eel-like animals rely on to move. It’s possible to replicate these biological circuits using newfangled electronic circuits and software, leading to the same kind of smooth (albeit robotic) motion in AmphiBot.
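To give a flavor of how a CPG produces an undulatory gait, here’s a minimal sketch—not the EPFL controller, just a textbook-style chain of phase oscillators with nearest-neighbor coupling. Each oscillator would drive one joint, and because the coupling pulls adjacent oscillators toward a fixed phase lag, a traveling wave of bending propagates down the body:

```python
import math

def cpg_chain(n=10, freq=1.0, lag=0.6, coupling=4.0, dt=0.01, steps=3000):
    """Chain of phase oscillators with nearest-neighbor coupling.
    Each oscillator would drive one joint; the coupling terms pull
    adjacent oscillators toward a fixed phase lag, producing a
    traveling wave down the chain."""
    phases = [0.0] * n
    for _ in range(steps):
        rates = []
        for i in range(n):
            d = 2 * math.pi * freq          # intrinsic oscillation
            if i > 0:                       # pull toward lag behind i-1
                d += coupling * math.sin(phases[i-1] - phases[i] - lag)
            if i < n - 1:                   # pull toward lag ahead of i+1
                d += coupling * math.sin(phases[i+1] - phases[i] + lag)
            rates.append(d)
        phases = [p + r * dt for p, r in zip(phases, rates)]
    # joint angle for segment i would be something like sin(phases[i]);
    # return the settled phase differences between neighbors
    return [(phases[i] - phases[i+1]) % (2 * math.pi) for i in range(n - 1)]
```

After a few seconds of simulated time, every neighboring pair settles to the commanded phase lag, which is exactly the wave pattern an eel-like body needs.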
Biological researchers had pretty much decided that CPGs explained the extent of wiggly animal motion, until it was discovered that you can sever an eel’s spinal cord and it’ll somehow maintain its coordinated undulatory swimming performance. Which is kinda nuts, right? Obviously, something else must be going on, but futzing with eels to figure out exactly what isn’t, I would guess, pleasant for either researchers or their test subjects, which is where the robots come in. We can’t make robotic eels that are exactly like the real thing, but we can duplicate some of their sensing and control systems well enough to understand how they do what they do.
AgnathaX exhibits the same smooth motions as the original version of AmphiBot, but it does so without having to rely on centralized programming that would be the equivalent of a biological CPG. Instead, it uses skin sensors that can detect pressure changes in the water around it, a feature also found on actual eels. By hooking these pressure sensors up to AgnathaX’s motorized segments, the robot can generate swimming motions even if its segments aren’t connected with each other—without a centralized nervous system, in other words. This spontaneous syncing up of disconnected moving elements is called entrainment, and the best demonstration of it that I’ve seen uses mechanical metronomes on a shared, movable platform: started out of phase, they gradually tick themselves into sync.
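The essence of entrainment can be sketched with two phase oscillators that have different natural frequencies and interact only through a shared medium (modeled here, as a simplification, as weak mutual phase coupling). They frequency-lock whenever the coupling strength exceeds half the mismatch in angular frequency:

```python
import math

def entrain(f1=1.0, f2=1.1, coupling=1.0, dt=0.001, steps=50_000):
    """Two oscillators with different natural frequencies (in Hz),
    interacting only through a shared medium, modeled as weak mutual
    phase coupling. They lock when coupling > half the difference
    in angular frequency; the phase offset then settles to a constant."""
    w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
    th1, th2 = 0.0, 1.5  # arbitrary initial phases
    for _ in range(steps):
        d1 = w1 + coupling * math.sin(th2 - th1)
        d2 = w2 + coupling * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    return th2 - th1  # constant offset once the pair is locked
```

With a 0.1 Hz mismatch and unit coupling, the pair locks with a small fixed phase offset; drop the coupling low enough and the phase difference drifts forever instead.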
The reason why this isn’t just neat but also useful is that it provides a secondary method of control for robots. If the centralized control system of your swimming robot gets busted, you can rely on this water pressure-mediated local control to generate a swimming motion. And there are applications for modular robots as well, since you can potentially create a swimming robot out of a bunch of different physically connected modules that don’t even have to talk to each other.
For more details, we spoke with Robin Thandiackal and Kamilo Melo at EPFL, first authors on the Science Robotics paper.
IEEE Spectrum: Why do you need a robot to do this kind of research?
Thandiackal and Melo: From a more general perspective, with this kind of research we learn and understand how a system works by building it. This then allows us to modify and investigate the different components and understand their contribution to the system as a whole.
In a more specific context, it is difficult to separate the different components of the nervous system with respect to locomotion within a live animal. The central components are especially difficult to remove, and this is where a robot or a simulated model becomes useful. We used both in our study. The robot has the unique advantage of operating within the real physics of the water, whereas these dynamics are only approximated in simulation. However, we are confident in our simulations too, because we validated them against the robot.
How is the robot model likely to be different from real animals? What can’t you figure out using the robot, and how much could the robot be upgraded to fill that gap?
Thandiackal and Melo: The robot is by no means an exact copy of a real animal; it is only a first approximation. From observations and previous knowledge of real animals, we created a mathematical representation of the neuromechanical control in real animals, and we implemented this representation of locomotion control on the robot to create a model. Because the robot interacts with the real physics of undulatory swimming, we put great effort into informing our design with the morphological and physiological characteristics of the real animal. This accounts, for example, for the scaling, morphology, and aspect ratio of the robot with respect to undulatory animals, and for the muscle model that we used to approximate the viscoelastic characteristics of real muscles with a rotational joint.
Upgrading the robot is not going to be making it more “biological.” Again, the robot is part of the model, not a copy of the real biology. For the sake of this project, the robot was sufficient, and only a few things were missing in our design. You can even add other types of sensors and use the same robot base. However, if we would like to improve our robot for the future, it would be interesting to collect other fluid information like the surrounding fluid speed simultaneously with the force sensing, or to measure hydrodynamic pressure directly. Finally, we aim to test our model of undulatory swimming using a robot with three-dimensional capabilities, something which we are currently working on.
What aspects of the function of a nervous system to generate undulatory motion in water aren’t redundant with the force feedback from motion that you describe?
Thandiackal and Melo: Apart from the generation of oscillations and intersegmental coupling, which we found can be redundantly generated by the force feedback, the central nervous system still provides unique higher level commands like steering to regulate swimming direction. These commands typically originate in the brain (supraspinal) and are at the same time influenced by sensory signals. In many fish the lateral line organ, which directly connects to the brain, helps to inform the brain, e.g., to maintain position (rheotaxis) under variable flow conditions.
How can this work lead to robots that are more resilient?
Thandiackal and Melo: Robots that have our complete control architecture, with both peripheral and central components, are remarkably fault-tolerant and robust against damage in their sensors, communication buses, and control circuits. In principle, the robot should have the same fault-tolerance as demonstrated in simulation, with the ability to swim despite missing sensors, broken communication bus, or broken local microcontroller. Our control architecture offers very graceful degradation of swimming ability (as opposed to catastrophic failure).
Why is this discovery potentially important for modular robots?
Thandiackal and Melo: We showed that undulatory swimming can emerge in a self-organized manner by incorporating local force feedback without explicit communication between modules. In principle, we could create swimming robots of different sizes by simply attaching independent modules in a chain (e.g., without a communication bus between them). This can be useful for the design of modular swimming units with a high degree of reconfigurability and robustness, e.g. for search and rescue missions or environmental monitoring. Furthermore, the custom-designed sensing units provide a new way of accurate force sensing in water along the entirety of the body. We therefore hope that such units can help swimming robots to navigate through flow perturbations and enable advanced maneuvers in unsteady flows.
Evan Ackerman is a senior editor at IEEE Spectrum. Since 2007, he has written over 6,000 articles on robotics and technology. He has a degree in Martian geology and is excellent at playing bagpipes.
“Ship tracks” over the ocean reveal a new strategy to fight climate change
An effervescent nozzle sprays tiny droplets of saltwater inside the team’s testing tent.
As we confront the enormous challenge of climate change, we should take inspiration from even the most unlikely sources. Take, for example, the tens of thousands of fossil-fueled ships that chug across the ocean, spewing plumes of pollutants that contribute to acid rain, ozone depletion, respiratory ailments, and global warming.
The particles produced by these ship emissions can also create brighter clouds, which in turn can produce a cooling effect via processes that occur naturally in our atmosphere. What if we could achieve this cooling effect without simultaneously releasing the greenhouse gases and toxic pollutants that ships emit? That’s the question the Marine Cloud Brightening (MCB) Project intends to answer.
Scientists have known for decades that the particulate emissions from ships can have a dramatic effect on low-lying stratocumulus clouds above the ocean. In satellite images, parts of the Earth’s oceans are streaked with bright white strips of clouds that correspond to shipping lanes. These artificially brightened clouds are a result of the tiny particles produced by the ships, and they reflect more sunlight back to space than unperturbed clouds do, and much more than the dark blue ocean underneath. Since these “ship tracks” block some of the sun’s energy from reaching Earth’s surface, they prevent some of the warming that would otherwise occur.
The formation of ship tracks is governed by the same basic principles behind all cloud formation. Clouds naturally appear when the relative humidity exceeds 100 percent, initiating condensation in the atmosphere. Individual cloud droplets form around microscopic particles called cloud condensation nuclei (CCN). Generally speaking, an increase in CCN increases the number of cloud droplets while reducing their size. Through a phenomenon known as the Twomey effect, this high concentration of droplets boosts the clouds’ reflectivity (also called albedo). Sources of CCN include aerosols like dust, pollen, soot, and even bacteria, along with man-made pollution from factories and ships. Over remote parts of the ocean, most CCN are of natural origin and include sea salt from crashing ocean waves.
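The Twomey effect can be illustrated numerically with a common textbook approximation (this is illustrative only, not the MCB Project’s model): cloud albedo as a function of optical depth, with optical depth scaling as the cube root of droplet number when liquid water content is held fixed.

```python
def albedo(tau):
    # Two-stream-style approximation: cloud albedo from optical depth
    return tau / (tau + 7.7)

def brightened_albedo(tau0, n0, n1):
    # At fixed liquid water content, optical depth scales as N^(1/3)
    # (the Twomey effect): more, smaller droplets reflect more light.
    return albedo(tau0 * (n1 / n0) ** (1 / 3))
```

In this approximation, quadrupling the droplet concentration in a cloud of optical depth 10 raises the albedo from roughly 0.56 to roughly 0.67—a substantial brightening from the same amount of water.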
Satellite imagery shows “ship tracks” over the ocean: bright clouds that form because of particles spewed out by ships. Jeff Schmaltz/MODIS Rapid Response Team/GSFC/NASA
The aim of the MCB Project is to consider whether deliberately adding more sea salt CCN to low marine clouds would cool the planet. The CCN would be generated by spraying seawater from ships. We expect that the sprayed seawater would instantly dry in the air and form tiny particles of salt, which would rise to the cloud layer via convection and act as seeds for cloud droplets. These generated particles would be much smaller than the particles from crashing waves, so there would be only a small relative increase in sea salt mass in the atmosphere. The goal would be to produce clouds that are slightly brighter (by 5 to 10 percent) and possibly longer lasting than typical clouds, resulting in more sunlight being reflected back to space.
“Solar climate intervention” is the umbrella term for projects such as ours that involve reflecting sunlight to reduce global warming and its most dangerous impacts. Other proposals include sprinkling reflective silicate beads over polar ice sheets and injecting materials with reflective properties, such as sulfates or calcium carbonate, into the stratosphere. None of the approaches in this young field are well understood, and they all carry potentially large unknown risks.
Solar climate intervention is not a replacement for reducing greenhouse gas emissions, which is imperative. But such reductions won’t address warming from existing greenhouse gases that are already in the atmosphere. As the effects of climate change intensify and tipping points are reached, we may need options to prevent the most catastrophic consequences to ecosystems and human life. And we’ll need a clear understanding of both the efficacy and risks of solar climate intervention technologies so people can make informed decisions about whether to implement them.
Our team, based at the University of Washington, the Palo Alto Research Center (PARC), and the Pacific Northwest National Laboratory, comprises experts in climate modeling, aerosol-cloud interactions, fluid dynamics, and spray systems. We see several key advantages to marine cloud brightening over other proposed forms of solar climate intervention. Using seawater to generate the particles gives us a free, abundant source of environmentally benign material, most of which would be returned to the ocean through deposition. Also, MCB could be done from sea level and wouldn’t rely on aircraft, so costs and associated emissions would be relatively low.
The effects of particles on clouds are temporary and localized, so experiments on MCB could be carried out over small areas and brief time periods (maybe spraying for a few hours per day over several weeks or months) without seriously perturbing the environment or global climate. These small studies would still yield significant information on the impacts of brightening. What’s more, we can quickly halt the use of MCB, with very rapid cessation of its effects.
Our project encompasses three critical areas of research. First, we need to find out if we can reliably and predictably increase reflectivity. To this end, we’ll need to quantify how the addition of generated sea salt particles changes the number of droplets in these clouds, and study how clouds behave when they have more droplets. Depending on atmospheric conditions, MCB could affect things like cloud droplet evaporation rate, the likelihood of precipitation, and cloud lifetime. Quantifying such effects will require both simulations and field experiments.
Second, we need more modeling to understand how MCB would affect weather and climate both locally and globally. It will be crucial to study any negative unintended consequences using accurate simulations before anyone considers implementation. Our team is initially focusing on modeling how clouds respond to additional CCN. At some point we’ll have to check our work with small-scale field studies, which will in turn improve the regional and global simulations we’ll run to understand the potential impacts of MCB under different climate change scenarios.
The third critical area of research is the development of a spray system that can produce the size and concentration of particles needed for the first small-scale field experiments. We’ll explain below how we’re tackling that challenge.
One of the first steps in our project was to identify the clouds most amenable to brightening. Through modeling and observational studies, we determined that the best target is stratocumulus clouds, which are low altitude (around 1 to 2 km) and shallow; we’re particularly interested in “clean” stratocumulus, which have low numbers of CCN. The increase in cloud albedo with the addition of CCN is generally strong in these clouds, whereas in deeper and more highly convective clouds other processes determine their brightness. Clouds over the ocean tend to be clean stratocumulus clouds, which is fortunate, because brightening clouds over dark surfaces, such as the ocean, will yield the highest albedo change. They’re also conveniently close to the liquid we want to spray.
In the phenomenon called the Twomey effect, clouds with higher concentrations of small particles have a higher albedo, meaning they’re more reflective. Such clouds might be less likely to produce rain, and the retained cloud water would keep albedo high. On the other hand, if dry air from above the cloud mixes in (entrainment), the cloud may produce rain and have a lower albedo. The full impact of MCB will be the combination of the Twomey effect and these cloud adjustments. Rob Wood
Based on our cloud type, we can estimate the number of particles to generate to see a measurable change in albedo. Our calculation involves the typical aerosol concentrations in clean marine stratocumulus clouds and the increase in CCN concentration needed to optimize the cloud brightening effect, which we estimate at 300 to 400 per cubic centimeter. We also take into account the dynamics of this part of the atmosphere, called the marine boundary layer, considering both the layer’s depth and the roughly three-day lifespan of particles within it. Given all those factors, we estimate that a single spray system would need to continuously deliver approximately 3×10¹⁵ particles per second to a cloud layer that covers about 2,000 square kilometers. Since it’s likely that not every particle will reach the clouds, we should aim for an order of magnitude or two more.
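That 3×10¹⁵-particles-per-second figure can be reproduced with a back-of-envelope calculation. The 1-kilometer boundary-layer depth below is an assumed round number (the article cites cloud altitudes of 1 to 2 km); the other inputs come from the text:

```python
def spray_rate(area_km2=2000, depth_m=1000, added_ccn_per_cm3=400,
               lifetime_days=3):
    """Particles per second needed to maintain an added CCN
    concentration throughout a boundary-layer volume, given that
    particles are removed with a roughly three-day lifetime.
    depth_m = 1000 is an assumed boundary-layer depth."""
    volume_cm3 = (area_km2 * 1e10) * (depth_m * 1e2)  # km^2->cm^2, m->cm
    total_particles = volume_cm3 * added_ccn_per_cm3
    return total_particles / (lifetime_days * 86400)  # per second
```

Running `spray_rate()` gives roughly 3×10¹⁵ particles per second, matching the estimate in the text.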
We can also determine the ideal particle size based on initial cloud modeling studies and efficiency considerations. These studies indicate that the spray system needs to generate seawater droplets that will dry to salt crystals of just 30–100 nanometers in diameter. Any smaller than that and the particles will not act as CCN. Particles larger than a couple hundred nanometers are still effective, but their larger mass means that energy is wasted in creating them. And particles that are significantly larger than several hundred nanometers can have a negative effect, since they can trigger rainfall that results in cloud loss.
Creating dry salt crystals of the optimal size requires spraying seawater droplets of 120–400 nm in diameter, which is surprisingly difficult to do in an energy-efficient way. Conventional spray nozzles, where water is forced through a narrow orifice, produce mists with diameters from tens of micrometers to several millimeters. To decrease the droplet size by a factor of ten, the pressure through the nozzle must increase more than 2,000 times. Other atomizers, like the ultrasonic nebulizers found in home humidifiers, similarly cannot produce small enough droplets without extremely high frequencies and power requirements.
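The link between the 120–400 nm wet droplets and the 30–100 nm dry crystals follows from seawater’s salt content: the dried crystal’s volume is the salt’s volume fraction of the original droplet, so the diameter shrinks by the cube root of that fraction. The salinity and density values below are typical figures I’m assuming, not numbers from the article:

```python
def dry_diameter_nm(wet_nm, salt_mass_frac=0.035,
                    rho_seawater=1.025, rho_salt=2.16):
    """Diameter of the salt crystal left when a seawater droplet dries.
    Densities in g/cm^3; 3.5% salinity and NaCl density are assumed
    typical values. Diameter scales with the cube root of the salt's
    volume fraction in the droplet."""
    salt_vol_frac = salt_mass_frac * rho_seawater / rho_salt
    return wet_nm * salt_vol_frac ** (1 / 3)
```

With these assumptions, a 120 nm droplet dries to a crystal of about 31 nm and a 400 nm droplet to about 102 nm—consistent with the 30–100 nm target range.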
Solving this problem required both out-of-the-box thinking and expertise in the production of small particles. That’s where Armand Neukermans came in.
Kate Murphy leads the engineering effort for the MCB project at PARC, the Xerox research lab in Silicon Valley. Christopher Michel
Armand Neukermans brought his expertise in ink jet printers to bear on the quest to make nozzles that could efficiently and reliably spray tiny droplets of seawater. Christopher Michel
After a distinguished career at HP and Xerox focused on the production of toner particles and ink jet printers, Neukermans was approached in 2009 by several eminent climate scientists, who asked him to turn his expertise toward making seawater droplets. He quickly assembled a cadre of volunteers, mostly retired engineers and scientists, and over the next decade these self-designated “Old Salts” tackled the challenge. They worked in a borrowed Silicon Valley laboratory, using equipment scrounged from their garages or purchased out of their own pockets. They explored several ways of producing the desired particle size distributions with various tradeoffs between particle size, energy efficiency, technical complexity, reliability, and cost. In 2019 they moved into a lab space at PARC, where they have access to equipment, materials, facilities, and more scientists with expertise in aerosols, fluid dynamics, microfabrication, and electronics.
The three most promising techniques identified by the team were effervescent spray nozzles, spraying salt water under supercritical conditions, and electrospraying to form Taylor cones (which we’ll explain later). The first option was deemed the easiest to scale up quickly, so the team moved forward with it. In an effervescent nozzle, pressurized air and salt water are pumped into a single channel, where the air flows through the center and the water swirls around the sides. When the mixture exits the nozzle, it produces droplets with sizes ranging from tens of nanometers to a few micrometers, with the overwhelming majority of particles in our desired size range. Effervescent nozzles are used in a range of applications, including engines, gas turbines, and spray coatings.
The key to this technology lies in the compressibility of air. As a gas flows through a constricted space, its velocity increases as the ratio of the upstream to downstream pressures increases. This relationship holds until the gas velocity reaches the speed of sound. As the compressed air leaves the nozzle at sonic speeds and enters the environment, which is at much lower pressure, the air undergoes a rapid radial expansion that explodes the surrounding ring of water into tiny droplets.
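The sonic-speed ceiling described above is the classic choked-flow condition for isentropic flow through a nozzle. For air (ratio of specific heats γ ≈ 1.4), the flow chokes once the downstream pressure falls below about 53 percent of the supply pressure:

```python
def critical_pressure_ratio(gamma=1.4):
    """Downstream/upstream pressure ratio at which gas flow through a
    nozzle throat chokes, i.e., reaches the speed of sound (standard
    isentropic-flow result; gamma = 1.4 for air)."""
    return (2 / (gamma + 1)) ** (gamma / (gamma - 1))
```

For air this ratio is about 0.528, so spraying into the open atmosphere with even a modestly pressurized supply pins the exit velocity at Mach 1, and the excess pressure is released in the rapid expansion that shatters the water into droplets.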
Coauthor Gary Cooper and intern Jessica Medrado test the effervescent nozzle inside the tent. Kate Murphy
Neukermans and company found that the effervescent nozzle works well enough for small-scale testing, but the efficiency—the energy required per correctly sized droplet—still needs to be improved. The two biggest sources of waste in our system are the large amounts of compressed air needed and the large fraction of droplets that are too big. Our latest efforts have focused on redesigning the flow paths in the nozzle to require smaller volumes of air. We’re also working to filter out the large droplets that could trigger rainfall. And to improve the distribution of droplet size, we’re considering ways to add charge to the droplets; the repulsion between charged droplets would inhibit coalescence, decreasing the number of oversized droplets.
Though we’re making progress with the effervescent nozzle, it never hurts to have a backup plan. And so we’re also exploring electrospray technology, which could yield a spray in which almost 100 percent of the droplets are within the desired size range. In this technique, seawater is fed through an emitter—a narrow orifice or capillary—while an extractor creates a large electric field. If the electrical force is of similar magnitude to the surface tension of the water, the liquid deforms into a cone, typically referred to as a Taylor cone. Over some threshold voltage, the cone tip emits a jet that quickly breaks up into highly charged droplets. The droplets divide until they reach their Rayleigh limit, the point where charge repulsion balances the surface tension. Fortuitously, surface seawater’s typical conductivity (4 siemens per meter) and surface tension (73 millinewtons per meter) yield droplets in our desired size range. The final droplet size can even be tuned via the electric field down to tens of nanometers, with a tighter size distribution than we get from mechanical nozzles.
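The Rayleigh limit mentioned above has a standard closed form, q = 8π√(ε₀γr³), where γ is the surface tension and r the droplet radius. Using the seawater surface tension quoted in the article:

```python
import math

def rayleigh_limit_charge(diameter_nm, surface_tension=0.073):
    """Maximum charge (in coulombs) a droplet can carry before Coulomb
    repulsion overcomes surface tension: q = 8*pi*sqrt(eps0*gamma*r^3).
    surface_tension in N/m (73 mN/m for seawater, per the article)."""
    eps0 = 8.854e-12                 # vacuum permittivity, F/m
    r = diameter_nm * 1e-9 / 2.0     # radius in meters
    return 8 * math.pi * math.sqrt(eps0 * surface_tension * r**3)
```

A 200 nm seawater droplet, for instance, can hold on the order of 6×10⁻¹⁶ coulombs, a few thousand elementary charges, before it fissions again.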
This diagram (not to scale) depicts the electrospray system, which uses an electric field to create cones of water that break up into tiny droplets. Kate Murphy
Electrospray is relatively simple to demonstrate with a single emitter-extractor pair, but one emitter only produces 10⁷–10⁹ droplets per second, whereas we need 10¹⁶–10¹⁷ per second. Producing that amount requires an array of up to 100,000 by 100,000 capillaries. Building such an array is no small feat. We’re relying on techniques more commonly associated with cloud computing than actual clouds. Using the same lithography, etch, and deposition techniques used to make integrated circuits, we can fabricate large arrays of tiny capillaries with aligned extractors and precisely placed electrodes.
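The array size follows directly from those rates: dividing the required throughput by the per-emitter rate brackets the number of capillaries needed.

```python
def emitters_needed(target_per_s, per_emitter_per_s):
    """Number of electrospray emitters required to reach a target
    total droplet rate, given the rate of a single emitter."""
    return target_per_s / per_emitter_per_s

# Best case: 1e16 droplets/s at 1e9 per emitter -> 1e7 emitters.
# Worst case: 1e17 droplets/s at 1e7 per emitter -> 1e10 emitters,
# which is the 100,000-by-100,000 array cited in the text.
```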
Images taken by a scanning electron microscope show the capillary emitters used in the electrospray system. Kate Murphy
Testing our technologies presents yet another set of challenges. Ideally, we would like to know the initial size distribution of the saltwater droplets. In practice, that’s nearly impossible to measure. Most of our droplets are smaller than the wavelength of light, precluding non-contact measurements based on light scattering. Instead, we must measure particle sizes downstream, after the plume has evolved. Our primary tool, called a scanning electrical mobility spectrometer, measures the mobility of charged dry particles in an electrical field to determine their diameter. But that method is sensitive to factors like the room’s size and air currents and whether the particles collide with objects in the room.
To address these problems, we built a sealed 425 cubic meter tent, equipped with dehumidifiers, fans, filters, and an array of connected sensors. Working in the tent allows us to spray for longer periods of time and with multiple nozzles, without the particle concentration or humidity becoming higher than what we would see in the field. We can also study how the spray plumes from multiple nozzles interact and evolve over time. What’s more, we can more precisely mimic conditions over the ocean and tune parameters such as air speed and humidity.
Part of the team inside the test tent; from left, “Old Salts” Lee Galbraith and Gary Cooper, Kate Murphy of PARC, and intern Jessica Medrado. Kate Murphy
We’ll eventually outgrow the tent and have to move to a large indoor space to continue our testing. The next step will be outdoor testing to study plume behavior in real conditions, though not at a high enough rate that we would measurably perturb the clouds. We’d like to measure particle size and concentrations far downstream of our sprayer, from hundreds of meters to several kilometers, to determine if the particles lift or sink and how far they spread. Such experiments will help us optimize our technology, answering such questions as whether we need to add heat to our system to encourage the particles to rise to the cloud layer.
The data obtained in these preliminary tests will also inform our models. And if the results of the model studies are promising, we can proceed to field experiments in which clouds are brightened sufficiently to study key processes. As discussed above, such experiments would be performed over a small and short time so that any effects on climate wouldn’t be significant. These experiments would provide a critical check of our simulations, and therefore of our ability to accurately predict the impacts of MCB.
It’s still unclear whether MCB could help society avoid the worst impacts of climate change, or whether it’s too risky, or not effective enough to be useful. At this point, we don’t know enough to advocate for its implementation, and we’re definitely not suggesting it as an alternative to reducing emissions. The intent of our research is to provide policymakers and society with the data needed to assess MCB as one approach to slow warming, providing information on both its potential and risks. To this end, we’ve submitted our experimental plans for review by the U.S. National Oceanic and Atmospheric Administration and for open publication as part of a U.S. National Academy of Sciences study of research in the field of solar climate intervention. We hope that we can shed light on the feasibility of MCB as a tool to make the planet safer.