Graduate

lab64 - open lab / office hours

Topic: 
Office Hours - get help with projects
Abstract / Description: 

From Rabbit Hole VR Club: Tutorials and instructional office hours will be held during Fall quarter for students who are interested in XR but don't have much experience. There will be weekly office hours (Sundays 5 - 6 p.m., lab64) with an experienced core member who will help with weekly assignments starting week 3. The content covered, courtesy of Udacity, is listed below:

Introduction to Virtual Reality
VR Scenes and Objects
VR Software Development

The pacing of the content will be about 2-3 hours per week. The timeline for these unofficial assignments is listed below.

By week 3: Finish Introduction to Virtual Reality
By week 4: Finish Animations in Scenes and Objects
By week 5: Finish Scenes and Objects
By week 6: Finish Controlling Objects Using Code in Software Development
By week 7: Finish Programming Animations in Software Development
By week 8: Finish Software Development

Basic programming experience is highly recommended. An experience level of having completed CS 106A will suffice for most of the topics covered, and an experience level of having completed CS 106B/X is ideal.

Date and Time: 
Sunday, October 29, 2017 - 5:00pm to 6:00pm
Venue: 
Packard 064

VR/AR Community presents 'The Design Language of Mixed Reality'

Topic: 
The Design Language of Mixed Reality
Abstract / Description: 

The Microsoft HoloLens, the first fully untethered holographic Windows computer, brings with it a new wave of holographic development. What are the challenges of Mixed Reality? What kind of apps make sense and work well? Tobiah Zarlez from Microsoft will answer these questions and more, covering the basics of the HoloLens, how it works, and how you can start developing holographic applications today.
Date and Time: 
Thursday, October 26, 2017 - 7:00pm
Venue: 
Hewlett 103

SCIEN colloquium: Light field Retargeting for Integral and Multi-panel Displays

Topic: 
Light field Retargeting for Integral and Multi-panel Displays
Abstract / Description: 

Light fields are a collection of rays emanating from a 3D scene in various directions that, when properly captured, provide a means of projecting depth and parallax cues on 3D displays. However, due to the limited aperture size and constrained spatial-angular sampling of many light field capture systems (e.g., plenoptic cameras), the displayed light fields support parallax views only within a narrow viewing zone. In addition, the autostereoscopic display may not match the capturing plenoptic system in spatio-angular resolution (e.g., an integral display) or in architecture (e.g., a multi-panel display), which requires careful engineering between the capture and display stages.

This talk presents an efficient light field retargeting pipeline for integral and multi-panel displays that produces controllable, enhanced-parallax content. This is accomplished by slicing the captured light fields according to their depth content, boosting the parallax, and merging the slices with data filling. For integral displays, the synthesized views are simply resampled and reordered into elemental images that, beneath a lenslet array, collectively create a multi-view rendering. For multi-panel displays, additional processing steps achieve seamless transitions across depth panels and viewing angles, with displayed views synthesized and aligned dynamically according to the position of the viewer. The retargeting technique is simulated and verified experimentally on actual integral and multi-panel displays.
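The slice-boost-merge idea at the core of the pipeline can be illustrated with a toy sketch (hypothetical code, not the speaker's implementation): each depth layer is shifted per view by its disparity times a parallax gain, then the layers are composited back to front.

```python
import numpy as np

def retarget_views(layers, disparities, gain, n_views):
    """Toy slice-boost-merge: `layers` are (H, W) arrays ordered back to
    front, `disparities` their per-view pixel shifts. Parallax is boosted
    by `gain`; each view is composited back to front (nonzero pixels of a
    nearer layer occlude farther ones)."""
    views = []
    for v in range(n_views):
        out = np.zeros_like(layers[0])
        for img, d in zip(layers, disparities):
            shifted = np.roll(img, int(round(v * d * gain)), axis=1)
            out = np.where(shifted > 0, shifted, out)  # near occludes far
        views.append(out)
    return views

# Single layer with one bright pixel at column 2, disparity 1 px/view,
# parallax boosted 2x: the pixel slides 4 columns between views 0 and 2.
layer = np.zeros((1, 16))
layer[0, 2] = 1.0
views = retarget_views([layer], disparities=[1], gain=2.0, n_views=3)
```

Real retargeting must also fill the disocclusions that the boosted shifts expose (the "data filling" step in the abstract), which this sketch omits.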

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Packard 101

EE380 Computer Systems Colloquium: Petascale Deep Learning on a Single Chip

Topic: 
Petascale Deep Learning on a Single Chip
Abstract / Description: 

Vathys.ai is a deep learning startup developing a new deep learning processor architecture aimed at massively improved energy efficiency and performance. The architecture is also designed to be highly scalable and amenable to next-generation DL models. Although deep learning processors appear to be the "hot topic" of the day in computer architecture, the majority (we argue all) of such designs incorrectly identify computation as the bottleneck and thus neglect the true culprits of inefficiency: data movement and miscellaneous control-flow overheads of the processor. This talk will cover many of the architectural strategies the Vathys processor uses to reduce data movement and improve efficiency. It will also cover some circuit-level innovations and include a quantitative and qualitative comparison to many DL processor designs, including the Google TPU, with numerical evidence for massive improvements over the TPU and other such processors.
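The data-movement argument can be made concrete with rough per-operation energy figures. The numbers below are the widely cited 45 nm estimates from Horowitz's ISSCC 2014 keynote, not anything specific to the Vathys design; the order-of-magnitude gap, not the exact values, is the point.

```python
# Approximate per-operation energies at 45 nm (Horowitz, ISSCC 2014).
# Illustrative figures only; real designs vary widely.
ENERGY_PJ = {
    "fp32_mul": 3.7,
    "fp32_add": 0.9,
    "sram_32b_read": 5.0,    # small (~8 KB) local SRAM
    "dram_32b_read": 640.0,  # off-chip DRAM access
}

def mac_energy_pj(operands_from_dram: bool) -> float:
    """Energy of one fp32 multiply-accumulate plus two operand fetches."""
    fetch = ENERGY_PJ["dram_32b_read" if operands_from_dram else "sram_32b_read"]
    return ENERGY_PJ["fp32_mul"] + ENERGY_PJ["fp32_add"] + 2 * fetch

# A DRAM-bound MAC costs dozens of times more than an SRAM-bound one,
# so arithmetic throughput alone says little about efficiency.
ratio = mac_energy_pj(True) / mac_energy_pj(False)
```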

ABOUT THE COLLOQUIUM:

See the Colloquium website, http://ee380.stanford.edu, for scheduled speakers, FAQ, and additional information. Stanford and SCPD students can enroll in EE380 for one unit of credit. Anyone is welcome to attend; talks are webcast live and archived for on-demand viewing over the web.

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Deep Learning in Speech Recognition

Topic: 
Deep Learning in Speech Recognition
Abstract / Description: 

While neural networks had been used in speech recognition since the early 1990s, they did not outperform traditional machine learning approaches until 2010, when Alex's team members at Microsoft Research demonstrated the superiority of Deep Neural Networks (DNNs) for large-vocabulary speech recognition systems. The speech community rapidly adopted deep learning, followed by the image processing community and many other disciplines. In this talk I will give an introduction to speech recognition, go over the fundamentals of deep learning, explain what it took for the speech recognition field to adopt deep learning, and describe how that has helped popularize personal assistants like Siri.

Date and Time: 
Wednesday, November 29, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Partisan Gerrymandering and the Supreme Court: The Role of Social Science

Topic: 
Partisan Gerrymandering and the Supreme Court: The Role of Social Science
Abstract / Description: 

The U.S. Supreme Court is considering a case this term, Gill v Whitford, that might lead to the first constitutional constraints on partisanship in redistricting. Eric McGhee is the inventor of the efficiency gap, a measure of gerrymandering that the court is considering in the case. He will describe the case's legal background, discuss some of the metrics that have been proposed for measuring gerrymandering, and reflect on the role of social science in the litigation.
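The efficiency gap mentioned above has a simple closed form: it is the difference between the two parties' "wasted" votes (every vote for a losing candidate, plus every winning vote beyond the threshold needed to win) divided by the total votes cast. A minimal sketch with made-up district totals:

```python
def efficiency_gap(districts):
    """districts: iterable of (votes_a, votes_b) per district.
    Returns (wasted_a - wasted_b) / total votes; a positive value means
    party A wasted more votes, i.e. the map favors party B."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        need = (a + b) // 2 + 1          # votes needed to win outright
        if a > b:
            wasted_a += a - need         # winner's surplus votes
            wasted_b += b                # all of the loser's votes
        else:
            wasted_b += b - need
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Hypothetical three-district state: A wins two districts 70-30 and
# loses one 35-65 -- a nearly balanced map by this measure.
gap = efficiency_gap([(70, 30), (70, 30), (35, 65)])
```

The intuition is that gerrymanders "pack" opponents into a few lopsided districts and "crack" the rest into narrow losses, both of which inflate one party's wasted-vote count.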

Date and Time: 
Wednesday, November 1, 2017 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computing with High-Dimensional Vectors

Topic: 
Computing with High-Dimensional Vectors
Abstract / Description: 

Computing with high-dimensional vectors complements traditional computing and occupies the gap between symbolic AI and artificial neural nets. Traditional computing treats bits, numbers, and memory pointers as basic objects on which all else is built. I will consider the possibility of computing with high-dimensional vectors as basic objects, for example with 10,000-bit words, when no individual bit nor subset of bits has a meaning of its own--when any piece of information encoded into a vector is distributed over all components. Thus a traditional data record subdivided into fields is encoded as a high-dimensional vector with the fields superposed.

Computing power arises from the operations on the basic objects--from what is called their algebra. Operations on bits form Boolean algebra, and the addition and multiplication of numbers form an algebraic structure called a "field." Two operations on high-dimensional vectors correspond to the addition and multiplication of numbers. With permutation of coordinates as the third operation, we end up with a system of computing that in some ways is richer and more powerful than arithmetic, and also different from linear algebra. Computing of this kind was anticipated by von Neumann, described by Plate, and has proven to be possible in high-dimensional spaces of different kinds.

The three operations, when applied to orthogonal or nearly orthogonal vectors, allow us to encode, decode, and manipulate sets, sequences, lists, and arbitrary data structures. One reason for high dimensionality is that it provides a nearly endless supply of nearly orthogonal vectors. Making them is simple because a randomly generated vector is approximately orthogonal to any vector encountered so far. The architecture includes a memory which, when cued with a high-dimensional vector, finds its nearest neighbors among the stored vectors. A neural-net associative memory is an example of such a memory.

Circuits for computing in high-D are thousands of bits wide but the components need not be ultra-reliable nor fast. Thus the architecture is a good match to emerging nanotechnology, with applications in many areas of machine learning. I will demonstrate high-dimensional computing with a simple algorithm for identifying languages.
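The three operations can be demonstrated in a few lines. This is a generic sketch using dense binary vectors, with XOR as the multiplication analogue and bitwise majority as the addition analogue (the talk's framework admits other vector spaces): a record is a majority-vote superposition of XOR-bound field/value pairs, and a noisy field value is recovered by unbinding and nearest-neighbor search.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000                                   # dimensionality

def rand_vec():                             # random vectors are nearly
    return rng.integers(0, 2, D, dtype=np.uint8)  # orthogonal in high-D

bind = np.bitwise_xor                       # "multiplication": self-inverse
def bundle(vecs):                           # "addition": bitwise majority vote
    return (np.sum(vecs, axis=0) * 2 > len(vecs)).astype(np.uint8)
def permute(x, k=1):                        # third operation: coordinate
    return np.roll(x, k)                    # permutation (e.g. for sequences)
def hamming(x, y):                          # normalized distance for decoding
    return np.count_nonzero(x != y) / D

# Encode the record {name: alice, age: thirty, city: paris} as one vector:
# fields are bound to values, then superposed.
name, age, city = rand_vec(), rand_vec(), rand_vec()
alice, thirty, paris = rand_vec(), rand_vec(), rand_vec()
record = bundle([bind(name, alice), bind(age, thirty), bind(city, paris)])

# Unbinding the "name" field yields a noisy copy of `alice`: close to it
# in Hamming distance (~0.25), far from everything else (~0.5).
probe = bind(record, name)
```

Because distances concentrate sharply at D = 10,000, the correct value is recovered reliably even though every bit of `record` carries information about all three fields at once.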

Date and Time: 
Wednesday, October 25, 2017 - 4:30pm
Venue: 
Gates B03

Engineering Abroad

Topic: 
Engineering Abroad programs information session
Abstract / Description: 

Hear about all of the opportunities you have to study abroad as a Stanford Engineering student, from Stanford Global Studies, Stanford Global Engineering, and the Bing Overseas Studies Program. Even better, hear from a panel of eight current engineering students who participated in these programs about their experiences abroad: specifically, how the programs contributed to their engineering studies and how they made them work.

Stay afterward from 8:00 to 8:30 p.m. for a mixer (light refreshments provided) and talk with the panelists yourself.

Date and Time: 
Monday, October 23, 2017 - 7:00pm
Venue: 
Geology Corner 320-105

Holographic Near-Eye Displays for Virtual and Augmented Reality

Topic: 
Holographic Near-Eye Displays for Virtual and Augmented Reality
Abstract / Description: 

Today's near-eye displays are a compromise of field of view, form factor, resolution, supported depth cues, and other factors. There is no clear path to eyeglasses-like displays that reproduce the full fidelity of human vision. Computational displays are a potential solution in which hardware complexity is traded for software complexity, where it is easier to meet many conflicting optical constraints. Among computational displays, digital holography is a particularly attractive approach that may scale to meet all the optical demands of an ideal near-eye display.

I will present novel designs for virtual and augmented reality near-eye displays based on phase-only holographic projection. The approach is built on the principles of Fresnel holography and double phase amplitude encoding, with additional hardware, phase correction factors, and spatial light modulator encodings to achieve full-color, high-contrast, low-noise holograms with high resolution and true per-pixel focal control. A unified focus, aberration correction, and vision correction model, along with a user calibration process, accounts for any optical defects between the light source and the retina. This optical correction ability not only fixes minor aberrations but also enables truly compact, eyeglasses-like displays with wide fields of view (80 degrees) that would be inaccessible through conventional means. All functionality is evaluated across a series of proof-of-concept hardware prototypes; I will discuss the remaining challenges to incorporating all features into a single device and obtaining practical displays.
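The double phase amplitude encoding mentioned in the abstract rests on a one-line identity: any complex amplitude a*e^(i*phi) with a <= 1 equals the average of two unit phasors, (e^(i*(phi+delta)) + e^(i*(phi-delta)))/2 with delta = arccos(a), so a phase-only modulator can represent an arbitrary complex field by displaying only phase values. A numerical check of the identity (not the talk's full pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
amp = rng.uniform(0.0, 1.0, 1000)         # target amplitudes in [0, 1]
phase = rng.uniform(-np.pi, np.pi, 1000)  # target phases
target = amp * np.exp(1j * phase)

# Double phase decomposition: two phase-only components whose average
# reproduces the complex target exactly, since cos(delta) = amp.
delta = np.arccos(amp)
p1, p2 = phase + delta, phase - delta
recon = 0.5 * (np.exp(1j * p1) + np.exp(1j * p2))
err = np.max(np.abs(recon - target))
```

In a real display the two phase components are spatially interleaved on the modulator, which is one source of the noise the corrective encodings in the talk address.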

Date and Time: 
Wednesday, October 18, 2017 - 4:30pm
Venue: 
Packard 101
