Hi, I'm Allan. I have 10+ years of experience in product and data visualization design. Currently, I'm at NVIDIA as the head of the RAPIDS visualization team. Lately, I've been managing the team to build out and integrate GPU-accelerated visualization libraries. I'm sort of a manager / PM / designer / developer hybrid. This portfolio perpetually needs updating, but below are some highlights of older things I've worked on.
BSID · MBA Sustainability · Product Design · User Experience · Data Visualization · Information Architecture · Design Thinking · Wireframing · CAD Modeling · 3D Printing
Whiteboard · HTML · CSS · JavaScript · D3.js · Three.js · React · Illustrator · Photoshop · Solidworks · Rhino
Augmenting trends research with artificial intelligence and automation.
An industry analyst's research typically involves manual, hit-or-miss term searching, while topic discovery and coverage are limited by what they are able to find before short deadlines. Applying AI techniques, such as natural language processing and topic modeling, can greatly improve an analyst's research results.
The overall research goal was to develop an application prototype that an analyst could quickly integrate into their current workflow with minimal retraining or retooling. The prototype also had to be highly scalable and easily deployed for use in consulting delivery centers with thousands of potential users.
As the lead product designer, my task was to map out the analyst's typical process, understand their goals and constraints, and develop a feature set that could integrate with our team's AI techniques. As this would serve as a technical demo as well as a prototype, I also had to design and build a front-end interface that was understandable to both analysts and general business audiences.
The project began with a series of user interviews to distill the typical research process. Major pain points included finding productive search terms while researching unfamiliar industries, the time-consuming nature of parsing results for relevant information or trends, and a scatter-shot approach to saving information. Working iteratively with an AI expert, we developed several techniques that could augment and integrate into the analyst's process. Those were then used to inform multiple sketch concepts, which were quickly refined into a working application for user testing and AI algorithm refinement.
While several concepts were considered, the final design settled on a simple yet familiar search bar and results list interface. Integrating OneNote into the tool kept all actions within the browser and provided a streamlined way to record information lineage within projects. Now an analyst could start research in a new industry, be immediately presented with relevant search term suggestions, have the results quickly summarized into major topics, and then see those topics in the context of trends.
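To give a flavor of the search term suggestion step, here is a minimal sketch in plain JavaScript. It is not the prototype's code: the real system relied on NLP and topic modeling, while this toy version simply ranks terms that co-occur with a seed term across a made-up corpus.

```javascript
// A minimal sketch of a "related search term" step, not the production code.
// Given a seed term and a small corpus of article snippets, rank other terms
// by how often they co-occur with the seed. Corpus and stop words are made up.

const STOP_WORDS = new Set(['the', 'and', 'for', 'with', 'that', 'from', 'are', 'this', 'have']);

function tokenize(text) {
  return text
    .toLowerCase()
    .match(/[a-z]{3,}/g)            // keep simple alphabetic tokens of 3+ chars
    ?.filter((t) => !STOP_WORDS.has(t)) ?? [];
}

// Rank terms that appear in the same documents as the seed term.
function suggestTerms(corpus, seedTerm, limit = 5) {
  const counts = new Map();
  for (const doc of corpus) {
    const tokens = new Set(tokenize(doc));
    if (!tokens.has(seedTerm)) continue;
    for (const token of tokens) {
      if (token === seedTerm) continue;
      counts.set(token, (counts.get(token) ?? 0) + 1);
    }
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([term]) => term);
}

// Hypothetical usage with a toy corpus of article snippets.
const corpus = [
  'Utilities are piloting grid storage alongside rooftop solar programs.',
  'Grid storage costs are falling as battery manufacturing scales up.',
  'Rooftop solar adoption is reshaping utility demand forecasts.',
];
console.log(suggestTerms(corpus, 'grid')); // e.g. ['storage', 'utilities', ...]
```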
The prototype application was well received, and its code base was transferred for production. The project also provided an exemplary use case for client discussions around applying human-centered design principles to guide the application of AI capabilities. The key to a successful design is understanding where AI is best suited to augment a process, and where the process needs to change to augment the AI.
A key challenge when incorporating AI as another stakeholder in the design process is the uncertainty of its abilities until later stages of development; often, it functions much like a black box. However, a successful implementation of an AI system can make user interactions much more natural. As the system takes on a greater cognitive load, the user is free to focus more on their primary task.
The agenda of applying AI's capabilities to augment research activities is ongoing, with projects investigating various interfaces and techniques for trend detection and utilization.
Capturing, visualizing, and interacting with data in 3D.
The emergence of consumer VR and the refinement of 3D technologies, such as Unity and WebGL, have created an opportunity to use a previously inaccessible medium to make exploring complex spatial data as intuitive as interacting with real-world objects.
Spanning multiple exploratory projects, the research began by experimenting with browser-based 3D data rendering techniques and graduated to room-scale immersive data capture and interaction. The overall goal was to better understand the medium's potential, determine its technical capabilities, and find what skill sets would be required to utilize and design for this technology.
As the lead designer and researcher, my role was to guide the research, formulate the deliverables, and build the proof-of-concept demonstrations with fellow researchers and developers.
Serving as the opening for a proposal of analytics capabilities to C-suite telco executives, the interactive 3D globe was meant to intuitively communicate the far-beyond-human scale of their data. The ability to provide this level of 3D rendering within a web browser, as well as the positive reception of the visualization, prompted me to investigate further applications for the medium.
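As a rough illustration of the browser-based rendering involved, here is a minimal Three.js sketch that places data points on a globe by converting latitude and longitude to positions on a sphere. The scene setup and sample coordinates are placeholders, not the original proposal's code.

```javascript
import * as THREE from 'three';

// Convert latitude/longitude (degrees) to a point on a sphere of given radius.
function latLonToVector3(lat, lon, radius) {
  const phi = (90 - lat) * (Math.PI / 180);
  const theta = (lon + 180) * (Math.PI / 180);
  return new THREE.Vector3(
    -radius * Math.sin(phi) * Math.cos(theta),
    radius * Math.cos(phi),
    radius * Math.sin(phi) * Math.sin(theta)
  );
}

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 300;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A simple wireframe sphere stands in for the globe itself.
const globe = new THREE.Mesh(
  new THREE.SphereGeometry(100, 32, 32),
  new THREE.MeshBasicMaterial({ color: 0x223344, wireframe: true })
);
scene.add(globe);

// Hypothetical data points: [latitude, longitude] pairs.
const dataPoints = [[40.7, -74.0], [51.5, -0.1], [35.7, 139.7]];
const positions = [];
for (const [lat, lon] of dataPoints) {
  const v = latLonToVector3(lat, lon, 101); // sit just above the surface
  positions.push(v.x, v.y, v.z);
}
const pointGeometry = new THREE.BufferGeometry();
pointGeometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
const points = new THREE.Points(pointGeometry, new THREE.PointsMaterial({ color: 0xff5533, size: 4 }));
scene.add(points);

// Slowly spin the globe and data together.
function animate() {
  requestAnimationFrame(animate);
  globe.rotation.y += 0.002;
  points.rotation.y += 0.002;
  renderer.render(scene, camera);
}
animate();
```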
As 360° videos become more prevalent, providing feedback on viewer engagement is particularly important to aid content providers in this new medium. Building off the capabilities from the globe demonstration, this project tracked and visualized users' gaze while they viewed a 360° video in a Samsung Gear VR. The red triangles represent the travel of each viewer's gaze. Sections of the video where multiple viewers' gazes converge correspond to spikes in the red line chart below, revealing areas of interest or frustration during the video.
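The sketch below shows one plausible way to score gaze convergence per frame, assuming each viewer's gaze is logged as a unit direction vector. It is not the metric behind the original chart, just an illustration of the idea: the longer the average of the viewers' gaze vectors, the more they agree on where to look.

```javascript
// A sketch of a per-frame gaze convergence score, assuming each viewer's gaze
// is logged as a unit direction vector { x, y, z }. Sample data is made up.

function convergenceScore(gazeVectors) {
  const sum = gazeVectors.reduce(
    (acc, v) => ({ x: acc.x + v.x, y: acc.y + v.y, z: acc.z + v.z }),
    { x: 0, y: 0, z: 0 }
  );
  const n = gazeVectors.length;
  // Length of the mean vector: 1.0 = everyone looking the same way, ~0 = scattered.
  return Math.hypot(sum.x / n, sum.y / n, sum.z / n);
}

// Hypothetical single-frame log from three viewers.
const frame = [
  { x: 0.0,  y: 0.1, z: 0.99  },
  { x: 0.05, y: 0.0, z: 0.998 },
  { x: 0.9,  y: 0.0, z: 0.43  },
];
console.log(convergenceScore(frame).toFixed(2)); // ~0.87: two of three viewers agree
```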
The techniques demonstrated for the 360° video fostered interest in developing a room-scale interaction framework for VR. The project centered on a use case of logging and analyzing a user's exploration of shelf displays in a virtual store. The project presented many technical challenges; the most interesting was experimenting with which combinations of visualizations, shown in their native 3D environment, worked most effectively to reveal user and group behaviors.
The room-scale VR analytics framework has been rolled into the catalog of offerings available to clients, while several startups have begun to offer similar techniques for 360° video behavior tracking, validating our approach. More importantly, the techniques and skill sets required to utilize this new medium are better defined, so future projects can begin with a stronger foundation.
The most challenging aspect of working with this new medium is the lack of design tools to aid the process, as most must be custom built or roughly hacked together. Yet spatial interaction is especially reliant on early, iterative prototyping to get right. The impact when done well is remarkable and unlike anything previously experienced.
WebGL and VR technologies are still in their infancy, but they already show great potential. With the advent of more advanced solutions and the introduction of AR, I feel this is just the beginning of a whole new field of design and interaction.
Providing insights with bespoke data visualization applications.
Visualization is one of the most effective ways of communicating patterns and insights from data sets. The recent proliferation of visualization tools on the web has rapidly increased its accessibility, yet an understanding of best practices or effective application is often lacking. Consequently, a program was needed to develop expertise around data visualization through several proof-of-concept applications.
Spanning several projects, the data visualization and analytics research program focused on intense, short-term application builds with clearly defined problems and data sets. Most involved interactive web-based interfaces because of the high level of control those toolsets enabled, as well as their accessibility.
As the lead designer for most projects, I was tasked with ensuring visualization best practices were adhered to, such as the appropriate use of design principles and data mapping, in addition to designing for good user experience and interaction. I also had to make sure the applications were effective as technological demonstrations and aligned with internal research agenda priorities.
As one of the earlier proofs of concept used to generate buy-in for future projects, the visualization had to use a data set and chart types that were relatable to a wide audience, but also able to guide them towards interesting insights from which they could deduce stories. In such cases, geospatial maps are ideal because they are quickly understandable and have high information density. Visually, the goal was for the data itself to be the focal point and the source of interaction, with supplemental information receding to the background.
Designed to make public home loan financial data understandable to a general audience, the series of three charts provided compelling insight into patterns behind the subprime mortgage crisis. The geospatial chart emphasized regional patterns of loan delinquency, including an interesting pattern of strategic defaulters, as well as the dramatic increase during the market crash. The 3D bubble plot concentrated on quickly providing a high-level comparison of selected banks' overall loan portfolio health and strategy. The network graph demonstrated how loan type similarities converged over time as standards became more lax, exposing loan portfolios to greater hidden risk.
A key feature of the risk analysis application is analyzing the same data set from three different chart type perspectives. This enables an audience to have a deeper view of the data's structure and affords them a much richer understanding of underlying patterns.
Examining Resource Description Framework (RDF) data models can be cumbersome, particularly when they require complex queries to retrieve information and results are shown in poorly contextualized tables. The graph explorer tool was designed to simplify the process by binding queries to a more intuitive network graph visualization that used point-and-click navigation for query selection. With this method, a user without technical expertise could explore an RDF database intuitively and quickly.
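As an illustration of the point-and-click binding, here is a minimal sketch that expands a clicked node by running a SPARQL query against a standard endpoint and converting the results into nodes and links for a force-directed graph. The endpoint URL and resource URIs are hypothetical; the original tool's query templates and graph model are not reproduced here.

```javascript
// A minimal sketch of expanding a clicked node, assuming a standard SPARQL
// endpoint that returns the SPARQL JSON results format. The endpoint URL and
// URIs below are placeholders, not the original tool's API.

async function expandNode(endpointUrl, nodeUri) {
  // Ask for everything directly connected to the clicked node.
  const query = `
    SELECT ?predicate ?object WHERE {
      <${nodeUri}> ?predicate ?object .
    } LIMIT 50`;

  const response = await fetch(
    `${endpointUrl}?query=${encodeURIComponent(query)}`,
    { headers: { Accept: 'application/sparql-results+json' } }
  );
  const json = await response.json();

  // Convert bindings into nodes and links a force-directed graph can consume.
  const nodes = [];
  const links = [];
  for (const row of json.results.bindings) {
    nodes.push({ id: row.object.value });
    links.push({ source: nodeUri, target: row.object.value, label: row.predicate.value });
  }
  return { nodes, links };
}

// Hypothetical usage: clicking a node merges its neighborhood into the graph.
// expandNode('https://example.org/sparql', 'https://example.org/resource/LoanProduct')
//   .then(({ nodes, links }) => console.log(nodes.length, 'new nodes'));
```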
The visualization agenda and demonstrations were well received and highly praised, forming the basis of countless client proposals and capability presentations. It enabled a core group of visualization specialists to develop highly valuable skills and experience, and provided a reference benchmark for other groups within the firm.
The hands-on nature of creating several data visualization applications provided key experience that helped ensure successful delivery of future work. One especially valuable insight: roughly 80% of a project's effort goes towards gaining access to, correctly formatting, and understanding the data before any visualization application can begin to be developed. This ties into the need to connect the data to visualizations as quickly as possible, because of the uncertainty of what it might reveal. The highly indeterminate and iterative process, therefore, requires a strong team with a broad mix of specialty skills.
As the labs-born visualization capability was so successful, it has since been rolled into the design practice, where it now works towards developing custom C-level visualizations for high-value projects.
Teaching visualization best practices through extensive course material and hands-on workshops.
As tools like Tableau and Qlik became more prominent in business, demand grew for the skills needed to use data visualization effectively. While some tutorials and guides were available, information was scattered and incomplete. Simple templating or rote-based approaches would not be sufficient for the numerous client demands of such a large company. Consequently, the Visual Literacy Curriculum (VLC) was commissioned and quickly took on a design-principle-centered approach.
Because of the massive scale of the firm and the need for improved visual literacy across all employee career levels, a multi-vector approach to training was necessary. Course work and materials would have to be accessible online and self-guided, supplemented by strategic, in-person workshops to skill up groups that could then promote best practices within their regions. Finally, an online community would be established as a centralized repository for resources and skilled personnel information.
As one of the original members of the VLC design team, I was tasked with planning the curriculum structure, deciding on its pedagogy, and aggregating and creating all training material. Additionally, I personally participated in planning and running over 10 workshops around the globe.
The Visual Literacy Curriculum (VLC) comprises a large internal online community, multiple online courses, and a hands-on workshop. Centered on the themes of data representation, design principles, and storytelling, the curriculum was developed to improve a person's ability to understand, evaluate, and create effective data visualizations. Emphasizing an iterative, sketch-driven design process, the VLC also helps trainees communicate in design terms, critique work, and create visualizations specifically tailored to their users' needs and data.
The VLC course material revolves around three sections. Data Representation details how to map data to combinations of visual elements, such as position or area, how these relate to perceptual accuracy, the fundamentals of chart components and chart selection, and how to critique. Design Principles covers the application of abstract concepts, such as proximity and emphasis, to control design attributes like white space and visual hierarchy. Lastly, Storytelling with Data introduces the idea of the persona, presentation context, and techniques for arranging story elements to form an appropriate narrative for a given audience. All of this is framed to familiarize trainees with a common design language and technique.
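As a tiny example of the data representation theme, the sketch below maps the same values to position and to area using d3-scale. The domains and pixel ranges are made up; the point is that an area encoding needs a square-root scale so the circle's area, not its radius, stays proportional to the value.

```javascript
// A small illustration of mapping the same values to position (a highly
// accurate channel) versus area (a less accurate one). Numbers are made up.

import { scaleLinear, scaleSqrt } from 'd3-scale';

const values = [10, 40, 90];

// Position encoding: value -> x coordinate in pixels.
const x = scaleLinear().domain([0, 100]).range([0, 500]);

// Area encoding: value -> circle radius. A square-root scale keeps the circle's
// AREA (not its radius) proportional to the value, a common source of error.
const r = scaleSqrt().domain([0, 100]).range([0, 40]);

for (const v of values) {
  console.log(`value ${v}: x = ${x(v).toFixed(0)}px, radius = ${r(v).toFixed(1)}px`);
}
```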
The workshop is typically an intensive, on-site, two-day event in which around 20 participants are guided through applying the online course material. Lectures are kept to a minimum, favoring an interactive data visualization development process as participants work through short case studies. Sketching, whiteboarding, and paper prototyping are stressed as the favored presentation media, often with surprisingly effective results.
The VLC had a successful rollout and was one of the earlier company-wide pushes towards embracing design thinking. At the latest count, over 200 people had participated in the workshop, over 1,000 had completed all three online courses, and over 9,000 had joined the continually evolving online community. Several individuals were certified as trainers and continue to run VLC workshops in their regions.
The hardest challenge in training individuals is getting them to absorb the course material, and course work can only go so far. One of the most interesting insights came through the refinement of the workshops: initially there were several sections requiring computers to create and present visualizations, but these proved cumbersome. We quickly replaced the computers with more sketching, which was surprisingly well received, as presentations and concept retention vastly improved.
The VLC is evolving to fit new and changing demands. A recent offshoot is the Data4Designers workshop, now being held across the world, in which the emphasis is placed on data structures and incorporating data as a stakeholder in the processes of experienced designers.
Co-Inventor · “System Architecture for Control Systems via Knowledge Layout Search.”
Contributor · “Accelerating Understanding Through Data Visualization.” · Accenture Point of View
Contributor · “Discover the Benefits of Data Visualization for Business and the Role of Process, People, and Technology in the Age of Big Data.” · Accenture Latest Thinking
Say hi. Discuss design thoughts. Ask about sustainability. Or maybe share your favorite Star Trek TNG episode.