EE380 Computer Systems Colloquium

EE380 Computer Systems Colloquium: Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing

Topic: 
Enabling NLP, Machine Learning, and Few-Shot Learning using Associative Processing
Abstract / Description: 

This presentation details a fully programmable, associative, content-based, compute-in-memory architecture that changes the concept of computing from serial data processing, where data is moved back and forth between the processor and memory, to massively parallel data processing, compute, and search directly in place.

This associative processing unit (APU) can be used in many machine learning applications, including one-shot/few-shot learning, convolutional neural networks, recommender systems, and data mining tasks such as prediction, classification, and clustering.

Additionally, the architecture is well suited to processing large corpora and can be applied to Question Answering (QA) and other NLP tasks such as language translation. It can embed long documents, compute any type of memory network in place, and answer complex questions in O(1) time.
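
As a rough illustration of the content-based model (a toy sketch, not the APU's actual instruction set), the defining operation is a search key compared against every memory row simultaneously, so lookup cost does not grow with the number of rows scanned sequentially:

```python
# Toy simulation of associative (content-addressable) search: one
# vectorized compare stands in for the in-memory parallel compare.
import numpy as np

rng = np.random.default_rng(0)
memory = rng.integers(0, 2, size=(1_000_000, 64), dtype=np.uint8)  # 1M records
key = memory[123_456].copy()                                       # a known row

matches = np.all(memory == key, axis=1)   # all rows compared "in place"
print("matching rows:", np.flatnonzero(matches))
```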

Date and Time: 
Wednesday, November 8, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Petascale Deep Learning on a Single Chip

Topic: 
Petascale Deep Learning on a Single Chip
Abstract / Description: 

Vathys.ai is a deep learning startup that has been developing a new deep learning processor architecture with the goal of massively improved energy efficiency and performance. The architecture is also designed to be highly scalable and amenable to next-generation DL models. Although deep learning processors appear to be the "hot topic" of the day in computer architecture, the majority (we argue all) of such designs incorrectly identify the bottleneck as computation and thus neglect the true culprits of inefficiency: data movement and miscellaneous control-flow processor overheads. This talk will cover many of the architectural strategies that the Vathys processor uses to reduce data movement and improve efficiency. The talk will also cover some circuit-level innovations and will include a quantitative and qualitative comparison to many DL processor designs, including the Google TPU, presenting numerical evidence for massive improvements over the TPU and other such processors.
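
A back-of-the-envelope calculation shows why data movement dominates. The energy figures below are rough 45 nm-era estimates (on the order of those in Horowitz's ISSCC 2014 survey), not Vathys's numbers; exact values vary by process and design:

```python
# Illustrative energy cost of moving data vs. computing with it.
E_FMUL = 3.7e-12   # J, 32-bit floating-point multiply (~45 nm estimate)
E_DRAM = 640e-12   # J, 32-bit DRAM access (~45 nm estimate)

# A design that fetches each operand from DRAM spends two orders of
# magnitude more energy on the fetch than on the arithmetic:
print(f"DRAM read / FP multiply: {E_DRAM / E_FMUL:.0f}x")
```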

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: NLV Agents

Topic: 
NLV Agents
Abstract / Description: 

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Partisan Gerrymandering and the Supreme Court: The Role of Social Science

Topic: 
Partisan Gerrymandering and the Supreme Court: The Role of Social Science
Abstract / Description: 

The U.S. Supreme Court is considering a case this term, Gill v. Whitford, that might lead to the first constitutional constraints on partisanship in redistricting. Eric McGhee is the inventor of the efficiency gap, a measure of gerrymandering that the Court is considering in the case. He will describe the case's legal background, discuss some of the metrics that have been proposed for measuring gerrymandering, and reflect on the role of social science in the litigation.
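
For concreteness, the efficiency gap can be computed as follows: a party's wasted votes in a district are all of its votes if it loses, and its votes beyond the winning threshold if it wins; the gap is the difference between the two parties' total wasted votes divided by all votes cast. The district counts below are invented to show a classic pack-and-crack map:

```python
# A minimal sketch of the efficiency gap with made-up vote counts.
def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b). A positive result means
    the map wastes more of party A's votes, i.e., favors party B."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        needed = (a + b) // 2 + 1          # votes needed to win
        if a > b:
            wasted_a += a - needed          # winner's surplus votes
            wasted_b += b                   # loser's votes all wasted
        else:
            wasted_b += b - needed
            wasted_a += a
        total += a + b
    return (wasted_a - wasted_b) / total

# Party A packed into one district and cracked across four others:
plan = [(85, 15), (40, 60), (40, 60), (40, 60), (40, 60)]
print(f"efficiency gap: {efficiency_gap(plan):+.1%}")   # +28.6%
```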

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computing with High-Dimensional Vectors

Topic: 
Computing with High-Dimensional Vectors
Abstract / Description: 

Computing with high-dimensional vectors complements traditional computing and occupies the gap between symbolic AI and artificial neural nets. Traditional computing treats bits, numbers, and memory pointers as the basic objects on which all else is built. I will consider the possibility of computing with high-dimensional vectors as the basic objects, for example 10,000-bit words, where no individual bit or subset of bits has a meaning of its own: any piece of information encoded into a vector is distributed over all of its components. Thus a traditional data record subdivided into fields is encoded as a high-dimensional vector with the fields superposed.

Computing power arises from the operations on the basic objects--from what is called their algebra. Operations on bits form Boolean algebra, and the addition and multiplication of numbers form an algebraic structure called a "field." Two operations on high-dimensional vectors correspond to the addition and multiplication of numbers. With permutation of coordinates as the third operation, we end up with a system of computing that in some ways is richer and more powerful than arithmetic, and also different from linear algebra. Computing of this kind was anticipated by von Neumann, described by Plate, and has proven to be possible in high-dimensional spaces of different kinds.

The three operations, when applied to orthogonal or nearly orthogonal vectors, allow us to encode, decode, and manipulate sets, sequences, lists, and arbitrary data structures. One reason for high dimensionality is that it provides a nearly endless supply of nearly orthogonal vectors, and generating them is simple: a randomly drawn vector is approximately orthogonal to any vector encountered so far. The architecture includes a memory which, when cued with a high-dimensional vector, finds its nearest neighbors among the stored vectors. A neural-net associative memory is one example of such a memory.
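
The sketch below instantiates this algebra for one common choice, 10,000-bit binary vectors: XOR plays the role of multiplication (binding), componentwise majority plays the role of addition (bundling), and cyclic shift serves as the permutation. The record example follows the description above; the names and data are illustrative:

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
hv = lambda: rng.integers(0, 2, D, dtype=np.uint8)   # random => ~orthogonal

def bundle(*vs):                         # majority vote in each position
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

sim = lambda u, v: 1 - np.mean(u ^ v)    # 1 = identical, ~0.5 = unrelated

# Item memory: every role and filler gets its own random vector.
items = {s: hv() for s in ["NAME", "AGE", "CITY", "alice", "42", "paris"]}

# Encode a record by binding roles to fillers (XOR) and superposing
# the bound pairs (majority vote):
record = bundle(items["NAME"] ^ items["alice"],
                items["AGE"] ^ items["42"],
                items["CITY"] ^ items["paris"])

# Decode the NAME field: unbinding yields a noisy copy of "alice";
# the nearest vector in item memory recovers it exactly.
noisy = record ^ items["NAME"]
print(max(items, key=lambda s: sim(items[s], noisy)))   # -> alice
```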

Circuits for computing in high dimensions are thousands of bits wide, but the components need be neither ultra-reliable nor fast. The architecture is therefore a good match for emerging nanotechnology, with applications in many areas of machine learning. I will demonstrate high-dimensional computing with a simple algorithm for identifying languages.
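
A toy version of such a language identifier (a sketch of the standard trigram-profile approach, with tiny placeholder training strings rather than real corpora) fits in a few lines:

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
letters = {c: rng.integers(0, 2, D, dtype=np.uint8)
           for c in "abcdefghijklmnopqrstuvwxyz "}

def profile(text):
    """Bundle the vectors of all letter trigrams in a text."""
    total = np.zeros(D, dtype=np.int64)
    for a, b, c in zip(text, text[1:], text[2:]):
        # Permutation (roll) encodes position in the trigram; XOR binds.
        total += np.roll(letters[a], 2) ^ np.roll(letters[b], 1) ^ letters[c]
    return (total * 2 > len(text) - 2).astype(np.uint8)   # majority

langs = {"english": profile("the quick brown fox jumps over the lazy dog"),
         "latin":   profile("lorem ipsum dolor sit amet consectetur")}

test = profile("the dog jumps over the fox")
print(max(langs, key=lambda k: 1 - np.mean(langs[k] ^ test)))  # -> english
```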

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Generalized Reversible Computing and the Unconventional Computing Landscape

Topic: 
Generalized Reversible Computing and the Unconventional Computing Landscape
Abstract / Description: 

With the end of transistor scaling now in sight, the raw energy efficiency (and thus the practical performance) of conventional digital computing is expected to plateau soon. There is therefore growing interest in exploring unconventional types of computing that may have the potential to take us beyond the limits of conventional CMOS technology. In this talk, I survey a range of unconventional computing approaches, with an emphasis on reversible computing, defined in an appropriately generalized way. Fundamental physical arguments indicate that it is the only approach that can potentially increase the energy efficiency and affordable performance of arbitrary computations by unboundedly large factors as the technology is further developed.
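
The core idea can be seen in miniature (a minimal sketch, not the generalized formulation covered in the talk): an irreversible gate like AND erases information and so, by Landauer's principle, must dissipate at least kT ln 2 per erased bit, whereas a reversible gate such as the Toffoli gate computes the same logic as a bijection that loses nothing:

```python
def toffoli(a, b, c):
    """Controlled-controlled-NOT: flip c iff both controls are set.
    A bijection on 3 bits, hence logically reversible."""
    return a, b, c ^ (a & b)

# Applying the gate twice restores the input: it is its own inverse.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# AND computed reversibly: with c = 0 the third output is a AND b,
# and the inputs are carried along, so no information is erased.
print(toffoli(1, 1, 0))   # (1, 1, 1)
```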

Date and Time: 
Wednesday, October 18, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: scratchwork, a tool for developing and communicating technical ideas

Topic: 
scratchwork: a tool for developing and communicating technical ideas
Abstract / Description: 

Digital tablets are no longer new or even expensive, but most of us still struggle to enter technical ideas (such as equations and diagrams) into a computer as easily as we write them on paper. I will discuss relevant existing technology and present scratchworktool.com, a tool designed to simplify the digital writing process even without a tablet. I will also cover some of the important decisions and mistakes I made, especially early in building it. I hope these lessons will be helpful for anyone who is (or may eventually be) interested in developing similarly sophisticated products to solve a consumer-facing problem.

Date and Time: 
Wednesday, October 11, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning

Topic: 
NVIDIA GPU Computing: A Journey from PC Gaming to Deep Learning
Abstract / Description: 

Deep Learning and GPU Computing are now being deployed across many industries, helping to solve big data problems ranging from computer vision and natural language processing to self-driving cars. At the heart of these solutions is the NVIDIA GPU, providing the computing power both to train these massive deep neural networks and to run inference with them efficiently. But how did the GPU get to this point?

In this talk I will present a personal perspective and some lessons learned during the GPU's journey and evolution from being the heart of the PC gaming platform, to today also powering the world's largest datacenters and supercomputers.

Date and Time: 
Wednesday, October 4, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: The Machineries of Doubt & Disinformation: Cigarettes, Climate & Other Electronic Confusions

Topic: 
The Machineries of Doubt & Disinformation: Cigarettes, Climate & Other Electronic Confusions
Abstract / Description: 

The tobacco industry has long employed the best marketing techniques and adopted the latest technologies for disinformation. The fossil energy industries have employed similar tactics and technologies. For both, the Internet has proved a fertile ground, and by now similar tactics have gained force in politics.

For example, the 2009 "Climategate" theft and use of emails against climate scientists seems a precursor of recent Russian efforts in the American and French elections.

This talk uses insights from the well-documented history of tobacco and fossil-fuel disinformation machinery to anticipate further attacks on science and political processes. It includes thoughts on the challenges of informed skepticism in the world of the Internet, Twitter, and Facebook, and on electronic cigarettes that monitor and control usage and may report back to the vendor.

Date and Time: 
Wednesday, May 10, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Ethics, Algorithms, and Systems

Topic: 
Ethics, Algorithms, and Systems
Abstract / Description: 

The Internet has made possible new means of manipulating opinions, purchases and votes that are unprecedented in human history in their effectiveness, scale and clandestine nature. Whether closely guided by human hands or operating independently of their creators, these algorithms now guide human decision making 24/7, often in ways that have ethical consequences. Biased search rankings, for example, have been shown to shift the voting preferences of undecided voters dramatically without any awareness on their part that they are being manipulated (the Search Engine Manipulation Effect, or SEME).

Recent research shows that SEME can impact a wide range of opinions, not just voting preferences, and that multiple searches increase SEME's impact. New experiments also help to explain why SEME is so powerful and demonstrate how SEME can be suppressed to some extent.

In 2016, new research demonstrated that search suggestions (in "autocomplete") can also be used to shift opinions and votes (the Search Suggestion Effect, or SSE).

Demonstrating these possibilities in research is one thing; do search engine companies actually show people search suggestions or search results that are biased in some way?

In 2016, AIBRT researchers recruited a nationwide network of field agents whose election-related searches were collected and aggregated for six months before the November election, preserving 13,207 searches and the 98,044 web pages to which the search results linked. This unique data set revealed that search results were indeed biased toward one candidate during most of this period in all 10 search positions on the first page of results - enough, perhaps, to shift millions of votes without people's knowledge.

Based on the success of this tracking effort, in early 2017 experts in multiple fields at multiple universities in the US and Europe came together to create The Sunlight Society (http://TheSunlightSociety.org), a nonprofit organization devoted to creating a worldwide ecosystem of passive monitoring software that will reveal a wide range of online manipulations as they occur, thus providing a means of identifying unethical algorithms as they are launched.

Date and Time: 
Wednesday, June 7, 2017 - 4:30pm
Venue: 
Gates B03
