research

October 2018

Professor Balaji Prabhakar researches traffic and routing, and how a nudge can help us better manage our commutes. Stanford Radio host Russ Altman speaks with Balaji on "The Future of Everything".

While well-known mapping apps have transformed the daily commute through better information, Balaji Prabhakar is exploring ways to digitally incentivize people to improve their driving habits.

He calls it "nudging," and says that small shifts in commute times — just 20 minutes earlier or later — can make a considerable impact on the day's congestion in highly trafficked urban areas, like San Francisco.

A few years ago, Prabhakar made headlines with a Stanford-only study that used small monetary incentives backed by larger lottery-like rewards to reduce peak-hour commuting on campus. He later undertook a similar but much larger effort in Singapore to promote off-peak train travel. In four years, participation in Singapore grew from 20,000 to 400,000 users.

Listen to the Stanford Radio episode "Nudging Your Commute," with guest Balaji Prabhakar.

Excerpted from the School of Engineering, Research & Ideas, "Balaji Prabhakar: Can digital incentives help alleviate traffic?" October 2018. 

Related News

"Research by EE PhD candidates Geng, Liu, and Yin featured in NYT Tech article", July 2018.

"Balaji Prabhakar has been named ACM Fellow", December 2017.

Yilong Geng (EE PhD candidate) presenting at NSDI '18
July 2018

A collaboration between Professor Balaji Prabhakar's team and Google has produced a software clock synchronization system that can track time down to 100 billionths of a second.

The paper, presented at NSDI '18, describes a nanosecond-level clock synchronization system that can enable a new spectrum of timing- and delay-critical applications in data centers.

The current, widely used clock synchronization protocol, NTP, achieves only millisecond-level accuracy. Existing solutions that reach accuracies of tens to hundreds of nanoseconds either require specially designed hardware throughout the network to combat random network delays and component noise, or exploit the clock synchronization support built into Ethernet PHY standards.

The research team presents HUYGENS, named for the Dutch physicist Christiaan Huygens, who invented the pendulum clock in 1656. HUYGENS is a software clock synchronization system that uses a synchronization network and leverages three key ideas. First, coded probes identify and reject impure probe data, that is, data from probes that suffered queuing delays, random jitter, or NIC timestamp noise. HUYGENS then processes the purified data with Support Vector Machines, a widely used and powerful classifier, to accurately estimate one-way propagation times and achieve clock synchronization to within 100 nanoseconds. Finally, HUYGENS exploits a natural network effect, the fact that a group of pairwise-synchronized clocks must also be transitively synchronized, to detect and correct residual synchronization errors.
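
As a rough illustration of the SVM step, the sketch below uses scikit-learn to recover a pairwise clock offset from simulated probe data. It is not the HUYGENS code: the timing parameters, noise model, and probe counts are invented, and the coded-probe filtering and network-effect correction that surround this step in the real system are omitted. The idea it demonstrates is that forward and reverse probe delays bound the offset from above and below, so the maximum-margin line separating the two point clouds approximates the offset over time.

```python
# A minimal sketch, not the HUYGENS implementation: estimate a pairwise clock
# offset by fitting a max-margin (linear SVM) separator between forward and
# reverse probe delays. All timing parameters below are invented.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

offset0 = 5.0        # assumed offset of clock B relative to A at t = 0 (us)
drift = 0.04         # assumed relative drift (us per second)
d = 10.0             # assumed one-way propagation delay, symmetric (us)

t = np.sort(rng.uniform(0.0, 2.0, 400))      # probe send times (s)
offset_t = offset0 + drift * t               # true offset over time
queuing = rng.exponential(0.5, 400)          # nonnegative queuing noise (us)

# Forward probes (A -> B): rx_B - tx_A = offset + d + queuing  (upper cloud).
# Reverse probes (B -> A): tx_B - rx_A = offset - d - queuing  (lower cloud).
fwd = offset_t[::2] + d + queuing[::2]
rev = offset_t[1::2] - d - queuing[1::2]

X = np.column_stack([np.concatenate([t[::2], t[1::2]]),
                     np.concatenate([fwd, rev])])
y = np.concatenate([np.ones(200), -np.ones(200)])

# The max-margin line separating the two clouds runs midway between them,
# i.e. it approximates offset(t).
svm = SVC(kernel="linear", C=1e6).fit(X, y)
(w_t, w_delta), b = svm.coef_[0], svm.intercept_[0]
t_mid = 1.0
est = -(w_t * t_mid + b) / w_delta   # boundary: w_t*t + w_delta*delta + b = 0
print(f"estimated offset at t={t_mid}s: {est:.2f} us "
      f"(true {offset0 + drift * t_mid:.2f} us)")
```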

The importance of technical advances in measuring time was underscored by European regulations that went into effect in January and that require financial institutions to synchronize time-stamped trades with microsecond accuracy.

Being able to trade at the nanosecond level is vital to Nasdaq. Two years ago, it debuted the Nasdaq Financial Framework, a software system that it envisions eventually trading everything from stocks and bonds to fish and car-sharing rides.

The new synchronization system will make it possible for Nasdaq to offer "pop-up" electronic markets on short notice anywhere in the world, Mr. Prabhakar said. He cited the World Cup as a hypothetical example of a short-term electronic marketplace.

"There are tickets needed, housing, people will need transportation," he said. "Think of an electronic market almost like a massive flea market hosted by Nasdaq software."

The HUYGENS team is Yilong Geng (EE PhD candidate), Shiyu Liu (EE PhD candidate), Zi Yin (EE PhD candidate), Ashish Naik (Google Inc.), EE professors Balaji Prabhakar and Mendel Rosenblum, and Amin Vahdat (Google Inc.).

 

Related Links

July 2018

Congratulations to professors Jon Fan and Juan Rivas-Davila! Two of their researchers won the 2018 NASA iTech Forum. The event is a collaborative effort between NASA and the U.S. Department of Energy's (DOE) Advanced Research Projects Agency-Energy (ARPA-E) to find and foster innovative solutions for critical energy challenges on Earth and in space.

The winning project was presented by Grayson Zulauf and Thaibao (Peter) Phan. Both are PhD candidates. Their collaborative project is developing technology for wireless charging of electric vehicles on Earth, and eventually, Mars. The researchers received invaluable feedback from NASA and DOE's ARPA-E leaders, as well as experts in the field of advanced energy technology.

"NASA is proud to provide a platform for innovators that exposes them to a cadre of industry experts who will be instrumental in the development of their technologies," said Kira Blackwell, NASA iTech program executive for STMD. "NASA's chief technologists and the U.S. Department of Energy's leading subject matter experts provided the teams with a better understanding of requirements for potential infusion of their technologies within a space environment."

Judges selected the top three innovations based on criteria including technical viability, the likely impact on future space exploration, benefits to humanity and commercialization potential. The teams representing the top three entries selected at the end of the forum received a trophy during the recognition ceremony on June 14.

"Our mission at ARPA-E is to change what's possible. We've been delighted to collaborate with NASA for the iTech challenge, to highlight and empower the people driving energy innovation across our country," said Conner Prochaska, senior advisor and chief of staff for ARPA-E. "We look forward to future collaborative opportunities with NASA so, together, we can continue to cultivate the next generation of energy technologies for Americans on the ground and in space."

"It was an honor for Citi to host 'Energy-Tech' thought leaders -- policy makers, academics, scientists, investors and innovators -- for NASA iTech challenge," said Jay Collins, vice chairman of Corporate and Investment Banking at Citi. "We were proud to work with NASA on such an important effort to move energy technology out of the lab and into scalnble solutions for the Moon, Mars and the planet Earth. Congratulations to the winners, whose technological leadership and entrepreneurialism made us all proud."

The top three winners of NASA iTech's 2018 Energy Cycle are listed in alphabetical order:

  • iFeather, Boulder, Colorado. In-situ Fabrication of Extraterrestrial Aerogels for Transparency, Heat, and Energy Regulation (iFEATHER) for Habitat, Aeronautic and Space Vessel, and Space Suit Applications. Focus area: Innovative Power Management and Distribution
  • Stanford University - Department of Electrical Engineering, Stanford, California. Two C: Transportation Electrification through Ubiquitous Wireless Charging. Focus area: Innovative Power Management and Distribution
  • WBGlobalSemi, Inc., Lakewood Ranch, Florida. Commercializing High Power Silicon Carbide (SiC) Bipolar Junction Transistors (BJTs) and Power Modules for Power Management and Distributed Power Applications. Focus area: Innovative Power Management and Distribution

 

Grayson Zulauf (third from left) is an EE PhD candidate. He is a researcher in the SUPERLab, directed by Professor Juan Rivas-Davila. The Fan Lab is directed by Professor Jonathan Fan.


Congratulations Jon, Juan, Grayson and Peter!

Nikhil Garg (EE PhD '20): interdisciplinary research using machine learning
April 2018

Lead author Nikhil Garg (PhD candidate, '20) demonstrates that word embeddings can be used as a powerful tool to quantify historical trends and social change. His research team developed metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved from 1910 through the 20th and into the 21st century. Their framework opens up a fruitful intersection between machine learning and quantitative social science.

Nikhil co-authored the paper with history Professor Londa Schiebinger, linguistics and computer science Professor Dan Jurafsky and biomedical data science Professor James Zou.

Their research shows that, over the past century, linguistic changes in gender and ethnic stereotypes correlated with major social movements and demographic changes in the U.S. Census data.

The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words – to measure changes in gender and ethnic stereotypes over the past century in the United States. They analyzed large databases of American books, newspapers and other texts and looked at how those linguistic changes correlated with actual U.S. Census demographic data and major social shifts such as the women's movement in the 1960s and the increase in Asian immigration, according to the research.
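
The sketch below gives a flavor of this kind of measurement. It is not the authors' code: it computes a simple relative-distance bias score, in the spirit of the paper's metrics, on a small present-day GloVe model downloaded through gensim rather than on the decade-by-decade historical embeddings the study trained, and the word lists are illustrative only.

```python
# A minimal sketch, not the authors' code: score how much closer a set of
# occupation words sits to "women" words than to "men" words in a pretrained
# embedding. The study applied metrics of this kind to embeddings trained on
# text from each decade to track stereotypes over time.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")   # small pretrained GloVe model

def group_vector(words):
    """Average of the normalized embeddings of a word group."""
    vecs = [model[w] / np.linalg.norm(model[w]) for w in words if w in model]
    return np.mean(vecs, axis=0)

women = group_vector(["she", "her", "woman", "female", "daughter"])
men = group_vector(["he", "his", "man", "male", "son"])

# Negative score: the word is embedded closer to the "women" group;
# positive score: closer to the "men" group.
for occ in ["nurse", "librarian", "teacher", "engineer", "carpenter"]:
    v = model[occ] / np.linalg.norm(model[occ])
    score = np.linalg.norm(v - women) - np.linalg.norm(v - men)
    print(f"{occ:>10s}  {score:+.3f}")
```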

"Word embeddings can be used as a microscope to study historical changes in stereotypes in our society," said James Zou, a courtesy professor of electrical engineering. "Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases."

"This type of research opens all kinds of doors to us," Schiebinger said. "It provides a new level of evidence that allow humanities scholars to go after questions about the evolution of stereotypes and biases at a scale that has never been done before."

"The starkness of the change in stereotypes stood out to me," Garg said. "When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate." 

The new research illuminates the value of interdisciplinary teamwork between humanities and the sciences, researchers said.

"This led to a very interesting and fruitful collaboration," Schiebinger said, adding that members of the group are working on further research together. "It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood." 


 

Proceedings of the National Academy of Sciences, "Word embeddings quantify 100 years of gender and ethnic stereotypes," April 3, 2018.

Excerpted from Stanford News, "Stanford researchers use machine-learning algorithm to measure changes in gender, ethnic bias in U.S." April 3, 2018.


Graduate student David Lindell and Matt O’Toole, a post-doctoral scholar, work in the lab. (Image credit: L.A. Cicero)
March 2018

A driverless car is making its way through a winding neighborhood street, about to make a sharp turn onto a road where a child’s ball has just rolled. Although no person in the car can see that ball, the car stops to avoid it. This is because the car is outfitted with extremely sensitive laser technology that reflects off nearby objects to see around corners.

“It sounds like magic but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper describing this work, published March 5 in Nature.

Related Links

Image credit: L.A. Cicero
February 2018


Krishna Shenoy and his team have been researching the use of brain-machine interfaces (BMIs) to assist people with paralysis. Recently, one of the researchers realized that because a BMI produces movement from a change in thought alone, without requiring physical movement, it would allow the study of the mental rehearsal that occurs before physical expression.

Although there are some important caveats, the results could point the way toward a deeper understanding of what mental rehearsal is and, the researchers believe, to a future where brain-machine interfaces, usually thought of as prosthetics for people with paralysis, are also tools for understanding the brain.

"Mental rehearsal is tantalizing, but difficult to study," said Saurabh Vyas, a graduate student in bioengineering and the paper's lead author. That's because there's no easy way to peer into a person's brain as he imagines himself racing to a win or practicing a performance. "This is where we thought brain-machine interfaces could be that lens, because they give you the ability to see what the brain is doing even when they're not actually moving," he said.

"We can't prove the connection beyond a shadow of a doubt," Krishna said, but "this is a major step in understanding what mental rehearsal may well be in all of us." The next steps, he and Vyas said, are to figure out how mental rehearsal relates to practice with a brain-machine interface – and how mental preparation, the key ingredient in transferring that practice to physical movements, relates to movement.

Meanwhile, Krishna said, the results demonstrate the potential of an entirely new tool for studying the mind. "It's like building a new tool and using it for something," he said. "We used a brain-machine interface to probe and advance basic science, and that's just super exciting."

Additional Stanford authors are Nir Even-Chen, a graduate student in electrical engineering, Sergey Stavisky, a postdoctoral fellow in neurosurgery, Stephen Ryu, an adjunct professor of electrical engineering, and Paul Nuyujukian, an assistant professor of bioengineering and of neurosurgery and a member of Stanford Bio-X and the Stanford Neurosciences Institute.

Funding for the study came from the National Institutes of Health, the National Science Foundation, a Ric Weiland Stanford Graduate Fellowship, a Bio-X Bowes Fellowship, the ALS Association, the Defense Advanced Research Projects Agency, the Simons Foundation and the Howard Hughes Medical Institute.

Excerpted from Stanford News, "Mental rehearsal prepares our minds for real-world action, Stanford researchers find," February 16, 2018.

 

Related News:

Research by PhD candidate and team detects errors from Neural Activity, November 2017.

Krishna Shenoy's translation device; turning thought into movement, March 2017.

Brain-Sensing Tech Developed by Krishna Shenoy and Team, September 2016.

Krishna Shenoy receives Inaugural Professorship, February 2017.

 

February 2018

Angad Rekhi (PhD candidate) and Amin Arbabian have developed a wake-up receiver that turns on a device in response to incoming ultrasonic signals – signals outside the range that humans can hear. Because it works at a significantly smaller wavelength, using ultrasound rather than radio waves, the receiver is much smaller than comparable wake-up receivers that respond to radio signals, while operating at extremely low power and with extended range.

This wake-up receiver has many potential applications, particularly in designing the next generation of networked devices, including so-called "smart" devices that can communicate directly with one another without human intervention.

"As technology advances, people use it for applications that you could never have thought of. The internet and the cellphone are two great examples of that," said Rekhi. "I'm excited to see how people will use wake-up receivers to enable the next generation of the Internet of Things."

Excerpted from Stanford News, "Stanford researchers develop new method for waking up small electronic devices", February 12, 2018

 

Related news:

Amin's Research Team Powers Tiny Implantable Devices, December 2017.

Stanford Team led by Amin Arbabian receives DOE ARPA-E Award, January 2017.

Amin Arbabian receives Tau Beta Pi Undergrad Teaching Award, June 2016.

PhD candidate Nir Even-Chen
November 2017

PhD candidate Nir Even-Chen and his advisor, Professor Krishna Shenoy, et al., share recent strides in brain-machine interface (BMI) innovation. BMIs are devices that record neural activity from the user's brain and translate it into movement of prosthetic devices. BMIs enable people with motor impairment, e.g., from a spinal cord injury, to control and move prosthetic devices with their minds. Users can control robotic arms to improve their independence, or a computer cursor for typing and browsing the web. Even-Chen et al.'s recently published paper, "Augmenting intracortical brain-machine interface with neurally driven error detectors," describes a new system that reads users' minds, detects when the user perceives a mistake, and intervenes with a corrective action. The new system allows users to control BMIs more easily, smoothly, and efficiently.

While most BMI studies focus on designing better techniques to infer the user's movement intention, Even-Chen et al. improved BMI performance by taking a very different approach: detecting and undoing mistakes. Their work presents both novel fundamental science and an implementation of the idea. They showed for the first time that it is possible to detect key-selection errors from the motor cortex, a brain area mainly involved in movement control. They then used the data in real time to undo, or even prevent, mistakes.

The need for real-time error correction

In our daily lives, we all make mistakes, from typos while texting to clicking the wrong link on a web page or knocking over a cup of coffee while reaching for the cake. Correcting these mistakes can be time-consuming and annoying, especially when they occur frequently during challenging tasks. Imagine a system that could detect, or predict, your mistakes (e.g., typos) and automatically undo them, or even prevent them from happening. This could save the time spent manually correcting mistakes, especially when errors are frequent and the corrective actions slow you down. Error detection is not always trivial; in some cases only the person who made the mistake knows what she intended. Thus, such an error detection system needs to infer one's intention, i.e., read her mind. An automatic error detection system is most effective when the task is challenging or when our skill is limited, and errors are common. A good candidate for testing such an error detection approach is a BMI system. First, BMIs enable a readout of the user's mind. Second, error detection can be highly beneficial for BMI users, since BMI control is challenging and prone to errors.

Intracortical BMIs, which record neural activity directly from the brain, have shown promising results in pilot clinical trials and are the highest-performing BMI systems to date. This makes them prime candidates for serving as an assistive technology for people with paralysis. Although the performance of intracortical BMI systems has markedly improved in the last two decades, errors, such as selecting the wrong key while typing, still occur, and performance remains far from that of able-bodied users.

Previously, it was unknown whether errors could be detected from the same brain region traditionally used to decode the BMI user's movement intention: the motor cortex. In their work, Even-Chen and colleagues found that when errors occur, a characteristic pattern of brain activity can be observed. That pattern enabled them to detect mistakes with high accuracy shortly after, and in some cases even before, they occurred.

This finding encouraged them to develop and implement a first-of-its-kind error "detect-and-act" system. The system reads the user's mind, detects when the user thinks an error occurred, and can automatically undo or prevent the error. The detect-and-act system works independently of, and in parallel with, a traditional movement BMI that estimates the user's movement intention. In a challenging BMI task that produced substantial errors, this approach improved the performance of the BMI. With the detect-and-act system, hard tasks have fewer errors and become easier, and using a BMI becomes smoother and less frustrating.
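
The sketch below shows the control flow of such a detect-and-act loop in a toy form. It is not the published decoder: the neural features, the error classifier, and the undo action are all simulated, purely to illustrate monitoring post-selection activity and reverting a selection when an error is flagged.

```python
# A toy sketch of the detect-and-act idea, not the authors' system. Neural
# features, the error classifier, and the undo action are all simulated here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulated_neural_features(is_error, n_channels=96):
    """Fake post-selection firing-rate features; error trials get a shifted mean."""
    return rng.normal(0.8 if is_error else 0.0, 1.0, n_channels)

# Calibration: train an error detector on labeled trials.
labels = rng.integers(0, 2, size=500)              # 1 = selection was wrong
X = np.array([simulated_neural_features(bool(y)) for y in labels])
detector = LogisticRegression(max_iter=1000).fit(X, labels)

# Closed loop: after every key selection, classify the post-selection activity
# and undo the selection if an error is detected.
typed = []
for intended, selected in [("H", "H"), ("I", "U"), ("I", "I"), ("!", "!")]:
    typed.append(selected)
    feats = simulated_neural_features(is_error=(intended != selected))
    if detector.predict(feats.reshape(1, -1))[0] == 1:
        typed.pop()                                # revert the wrong key
print("".join(typed))                              # expected: "HI!"
```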

A detect-and-act system could potentially be used to improve how quickly people with paralysis can type or control a robotic arm using a BMI, for example by automatically correcting a mistake as they type, or by stopping the movement of a robotic arm when they are about to knock over their coffee. While this work was done in a pre-clinical trial with monkeys, Even-Chen and colleagues have also presented encouraging preliminary results from a clinical trial (BrainGate2) at a conference, showing the potential for translation to humans.

 

Read more: Journal of Neural Engineering, "Augmenting intracortical brain-machine interface with neurally driven error detectors."
Additional authors include Sergey Stavisky, Jonathan Kao, Stephen Ryu, and Krishna Shenoy. 

 

July 2017

One day soon we may live in smart houses that cater to our habits and needs, or ride in autonomous cars that rely on embedded sensors to provide safety and convenience. But today's electronic devices may not be able to handle the deluge of data such applications portend, because of limitations in their materials and design, according to the authors of a Stanford-led experiment recently published in Nature.

To begin with, silicon transistors are no longer improving at their historic rate, which threatens to end the promise of smaller, faster computing known as Moore's Law. A second and related reason is computer design, say senior authors and Stanford EE professors Subhasish Mitra and H.-S. Philip Wong. Today's computers rely on separate logic and memory chips. These are laid out in two dimensions, like houses in a suburb, and connected by tiny wires, or interconnects, that become bottlenecked with data traffic.

Now, the Stanford team has created a chip that breaks this bottleneck in two ways: first, by using nanomaterials not based on silicon for both logic and memory, and second, by stacking these computation and storage layers vertically, like floors in a high-rise, with a plethora of elevator-like interconnects between the "floors" to eliminate delays. "This is the largest and most complex nanoelectronic system that has so far been made using the materials and nanotechnologies that are emerging to leapfrog silicon," said Mitra.

The team, whose other Stanford members include EE professors Roger Howe and Krishna Saraswat, integrated over 2 million non-silicon transistors and 1 million memory cells, in addition to on-chip sensors for detecting gases – a proof of principle for other tasks yet to be devised. "Electronic devices of these materials and three-dimensional design could ultimately give us computational systems 1,000 times more energy-efficient than anything we can build of silicon," Wong said.

First author Max Shulaker (PhD '16), who performed this work while a PhD candidate, is now an assistant professor at MIT and core member of its Microsystems Technology Laboratories. He explained in a single word why the team had to use emerging nanotechnologies and not conventional silicon technologies to achieve the high-rise design: heat. "Building silicon transistors involves temperatures of over 1,000 degrees Celsius," Shulaker said. "If you try to build a second layer on top of the first, you'll damage the bottom layer. This is why chips today have a single layer of circuitry."

The magic of the materials

The new prototype chip is a radical change from today's chips because it uses multiple nanotechnologies that can be fabricated at relatively low heat, Shulaker explained. Instead of relying on silicon-based transistors, the new chip uses carbon nanotubes, or CNTs, to perform computations. CNTs are sheets of 2-D carbon formed into nanocylinders. The new Nature paper incorporates prior ground-breaking work by this team in developing the world's first all-CNT computer.

The memory component of the new chip also relied on new processes and materials improved upon by this team. Called resistive random-access memory (RRAM), this is a type of nonvolatile memory — meaning that it doesn't lose data when the power is turned off – that operates by changing the resistance of a solid dielectric material.

The key in this work is that CNT circuits and RRAM memory can be fabricated at temperatures below 200 degrees Celsius. "This means they can be built up in layers without harming the circuits beneath," Shulaker says. "This truly is a remarkable feat of engineering," says Barbara De Salvo, scientific director at CEA-LETI, France, an international expert not connected with this project.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting a plethora of wires between these layers, this 3-D architecture promises to address the communication bottleneck. "In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips," Saraswat said.

To demonstrate the potential of the technology, the researchers placed over a million carbon nanotube-based sensors on the surface of the chip, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage and computing, the chip was able to measure each of the sensors in parallel and then write directly into its memory, generating huge bandwidth without risk of hitting a bottleneck, because the 3-D design made it unnecessary to move data between chips. In fact, even though Shulaker built the chip using the limited capabilities of an academic fabrication facility, the peak bandwidth between vertical layers of the chip could potentially approach and exceed the peak memory bandwidth of the most sophisticated silicon-based technologies available today.

System benefits

This provides several simultaneous benefits for future computing systems.

"As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information," Mitra says.

Energy efficiency is another benefit. "Logic made from carbon nanotubes will be ten times more energy efficient as today's logic made from silicon," Wong said. "RRAM can also be denser, faster and more energy-efficient than the memory we use today."

Thanks to the ground-breaking approach embodied by the Nature paper, the work is getting attention from leading scientists who are not directly connected with the research. Jan Rabaey, a professor of electrical engineering and computer sciences at the University of California, Berkeley, said 3-D chip architecture is such a fundamentally different approach that it may have other, more futuristic benefits to the advance of computing. "These [3-D] structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets," Rabaey said, adding, "The approach presented by the authors is definitely a great first step in that direction."

 

This work was funded by the Defense Advanced Research Projects Agency, National Science Foundation, Semiconductor Research Corporation, STARnet SONIC and member companies of the Stanford SystemX Alliance.


 

This story is a revised version of a press release by MIT News correspondent Helen Knight.

August 2017

The next generation of feature-filled and energy-efficient electronics will require computer chips just a few atoms thick. For all its positive attributes, trusty silicon can't take us to these ultrathin extremes.

Now, electrical engineers at Stanford have identified two semiconductors – hafnium diselenide and zirconium diselenide – that share or even exceed some of silicon's desirable traits, starting with the fact that all three materials can "rust."

"It's a bit like rust, but a very desirable rust," said Eric Pop, an associate professor of electrical engineering, who co-authored with post-doctoral scholar Michal Mleczko a paper that appears in the journal Science Advances.

The new materials can also be shrunk to functional circuits just three atoms thick and they require less energy than silicon circuits. Although still experimental, the researchers said the materials could be a step toward the kinds of thinner, more energy-efficient chips demanded by devices of the future.

Silicon's strengths
Silicon has several qualities that have led it to become the bedrock of electronics, Pop explained. One is that it is blessed with a very good "native" insulator, silicon dioxide or, in plain English, silicon rust.

Exposing silicon to oxygen during manufacturing gives chip-makers an easy way to isolate their circuitry. Other semiconductors do not "rust" into good insulators when exposed to oxygen, so they must be layered with additional insulators, a step that introduces engineering challenges. Both of the diselenides the Stanford group tested formed this elusive, yet high-quality insulating rust layer when exposed to oxygen.

Not only do both ultrathin semiconductors rust, they do so in a way that is even more desirable than silicon. They form what are called "high-K" insulators, which enable lower power operation than is possible with silicon and its silicon oxide insulator.

As the Stanford researchers started shrinking the diselenides to atomic thinness, they realized that these ultrathin semiconductors share another of silicon's secret advantages: the energy needed to switch transistors on – a critical step in computing, called the band gap – is in a just-right range. Too low and the circuits leak and become unreliable. Too high and the chip takes too much energy to operate and becomes inefficient. Both materials were in the same optimal range as silicon.

All this and the diselenides can also be fashioned into circuits just three atoms thick, or about two-thirds of a nanometer, something silicon cannot do.

The combination of thinner circuits and desirable high-K insulation means that these ultrathin semiconductors could be made into transistors 10 times smaller than anything possible with silicon today.

"Silicon won't go away. But for consumers this could mean much longer battery life and much more complex functionality if these semiconductors can be integrated with silicon," Pop said.

More work to do
There is much work ahead. First, Mleczko and Pop must refine the electrical contacts between transistors on their ultrathin diselenide circuits. "These connections have always proved a challenge for any new semiconductor, and the difficulty becomes greater as we shrink circuits to the atomic scale," Mleczko said.

They are also working to better control the oxidized insulators to ensure they remain as thin and stable as possible. Last, but not least, only when these things are in order will they begin to integrate with other materials and then to scale up to working wafers, complex circuits and, eventually, complete systems.

"There's more research to do, but a new path to thinner, smaller circuits – and more energy-efficient electronics – is within reach," Pop said.



Reprinted from Stanford Magazine, "New semiconductor materials exceed some of silicon's 'secret' powers," August 14, 2017.
