In late October of 2015, I attended the HTML5 Developer Conference in San Francisco, considered to be one of the largest gatherings of developers working in web technologies and featuring many renowned speakers. After presenting my key takeaways internally to colleagues and externally at SAP Inside Track Walldorf, I realized that I should share my learnings with a bigger audience and decided to summarize my impressions and a selection of talks in this SCN blog post.

Overall it was an exciting experience for me, as I got the chance to meet tech stars and peers from all over the world and gained insights into the latest trends in Web Performance, Protocols (HTTP/2, WebSockets), IoT/WoT, UX and other hot topics. Apart from conference talks, there were also training courses aimed at getting hands-on experience through live coding sessions. Talks were mostly held at the Yerba Buena Center for the Arts and the Metreon Center, which offered a great view of San Francisco, well captured in this photo by Robert Dawson:

https://pbs.twimg.com/media/CRswqTCUAAAuD3E.jpg:large

Design + Performance (Slides)

Steve Souders, SpeedCurve

This was an interesting talk by Steve Souders, a well-known expert on web performance who previously led performance engineering at Google and Yahoo. He currently works on SpeedCurve, a front-end performance monitoring tool.

Steve started his talk by explaining the importance of interdisciplinary teams. He suggests bringing designers and developers together and increasing collaboration between them from the initial stage of a project (rather than having designers produce the concepts in isolation and developers take over afterwards). This helps produce “non-reckless” designs that consider the trade-offs between performance and design.

The following are some of the main guiding principles that Steve presented for measuring and improving site performance.

1. Define performance budgets to track progress and get alerts whenever limits are exceeded. Here’s a good blog post about this concept.
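To make the idea concrete, here is a minimal sketch (not from the talk; metric names and limits are invented for illustration) of what a budget check running in CI could look like:

```javascript
// Illustrative performance budget: limits a team might agree on.
const budget = {
  pageWeightKB: 500, // total transfer size
  requests: 80,      // number of HTTP requests
  speedIndex: 3000   // milliseconds
};

// Compare measured metrics against the budget and report violations.
function checkBudget(metrics, limits) {
  const violations = [];
  for (const name of Object.keys(limits)) {
    if (metrics[name] > limits[name]) {
      violations.push(`${name}: ${metrics[name]} exceeds budget of ${limits[name]}`);
    }
  }
  return violations;
}

const violations = checkBudget(
  { pageWeightKB: 620, requests: 75, speedIndex: 3400 },
  budget
);
console.log(violations.length); // 2 — pageWeightKB and speedIndex are over budget
```

A CI job would fail the build (or raise an alert) whenever this list is non-empty.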

[Image: in-page performance reminders]

2. Introduce in-page metrics (visible only internally) as constant reminders to the team of how performance is doing, and as alerts in case of any “performance budget” violations or regressions. Using Etsy as an example, Steve explained how small changes like these can help establish a “culture of performance”.

3. Do not use window.onload as a performance metric, as it’s not suitable for pages with dynamic behavior, preloading or lazy loading. Look instead at metrics that better capture the rendering experience, such as Speed Index, which roughly corresponds to the time at which the median pixel gets painted.
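Intuitively, Speed Index rewards pages that become visually complete early. A toy calculation (my own illustration with invented sample data, not code from the talk) over a sampled “visual completeness” curve:

```javascript
// Speed Index is the area above the visual-completeness curve: lower is better.
// samples: [{ time: ms, completeness: 0..1 }, ...] sorted by time.
function speedIndex(samples) {
  let index = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].time - samples[i - 1].time;
    index += dt * (1 - samples[i - 1].completeness); // unfinished fraction × time
  }
  return index;
}

// Both pages finish at 2000 ms, but the first shows 90% of its content early.
const fastRender = speedIndex([
  { time: 0, completeness: 0 },
  { time: 500, completeness: 0.9 },
  { time: 2000, completeness: 1 }
]);
const slowRender = speedIndex([
  { time: 0, completeness: 0 },
  { time: 1500, completeness: 0.1 },
  { time: 2000, completeness: 1 }
]);
console.log(fastRender, slowRender); // 650 1950 — early rendering wins
```

Both pages would report the same onload time, which is exactly why onload is a poor proxy for rendering experience.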

4. Most importantly, define your own custom metrics, as there is no all-purpose metric. By custom metrics he means measuring the design elements that matter most to the user experience (e.g. Twitter’s “time to first tweet”). These can be measured through the User Timing API, which is part of the W3C’s web performance specifications and helps to identify hot spots in the code. It allows you to register time measurements at different places in JavaScript, which are stored by the browser. The two main concepts are: mark (a timestamp) and measure (the time elapsed between two marks). The related specifications also include the Resource Timing and Navigation Timing APIs.
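A minimal User Timing example (the metric name is invented; this runs in modern browsers, and Node.js v16+ also exposes a compatible global `performance`):

```javascript
// Mark the start of the work we consider a custom metric.
performance.mark('results-start');

// ... the work whose duration matters to users, e.g. rendering results ...
for (let i = 0; i < 1e6; i++) {} // placeholder workload

// Mark the end, then measure the elapsed time between the two marks.
performance.mark('results-end');
performance.measure('time-to-results', 'results-start', 'results-end');

const [entry] = performance.getEntriesByName('time-to-results');
console.log(entry.duration); // elapsed milliseconds between the two marks
```

RUM tools can then pick these named entries up from the browser’s performance timeline and report them alongside the standard metrics.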

For tracking custom metrics there are two types of website monitoring solutions: Synthetic and Real-User Monitoring (RUM).

  • RUM tools gather performance metrics directly from end users’ browsers through embedded JavaScript beacons and collect insights into how people use the site (environments, browsing paths, etc.).
  • Synthetic tools simulate user actions and measure metrics like response time and load time from different locations (e.g. WebPageTest, SpeedCurve, Dynatrace).
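The core of a RUM beacon is small: read a few numbers from the browser’s Navigation Timing data and post them to a collection endpoint. A sketch (field names and the endpoint are my own illustration, not any particular vendor’s format):

```javascript
// Derive a few classic RUM metrics from a Navigation Timing-style object.
function buildBeacon(timing) {
  return {
    dns: timing.domainLookupEnd - timing.domainLookupStart,
    tcp: timing.connectEnd - timing.connectStart,
    ttfb: timing.responseStart - timing.requestStart, // time to first byte
    load: timing.loadEventEnd - timing.navigationStart
  };
}

// In a browser this would be performance.timing; here a fake entry for demo:
const payload = buildBeacon({
  navigationStart: 0, domainLookupStart: 5, domainLookupEnd: 25,
  connectStart: 25, connectEnd: 60, requestStart: 60,
  responseStart: 180, loadEventEnd: 1200
});
console.log(payload.ttfb); // 120
// In a real page: navigator.sendBeacon('/rum', JSON.stringify(payload));
```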

You might want to check out Steve’s famous 14 Rules for Faster-Loading Web Sites if you haven’t heard of them yet, but beware that in the context of HTTP/2 a few of them are now viewed as anti-patterns.

Measuring Web Perf? Let’s write an app for that!!! (Slides)

Parashuram Narasimhan, Microsoft

This was another performance-related session, which revolved around the idea that performance needs to be treated as a feature: like any other feature, it must have automated tests running against every build and must be monitored for regressions. Making automated web performance measurement part of continuous integration allows you to collect metrics, show trends of how the application behaves across commits, and understand exactly which commits introduced performance regressions.


[Image: performance is a feature]

Parashuram showed how this can be done using browser-perf, his open-source web performance metrics tool. The tool mimics real user actions (e.g. through Selenium), collects tracing information (e.g. frame rates, layouts, paints, load time, as seen in the Chrome DevTools Timeline panel), and monitors site performance for every commit as part of the CI process. An alternative to browser-perf is Phantomas, which is built on top of PhantomJS.

In the final part of his session, Parashuram talked about monitoring performance trends of web frameworks. Inspired by Topcoat’s example of integrating performance tests into the daily commit process, he has started to experiment with analyzing performance trends over major releases of different JavaScript frameworks by plotting metrics like frame rates, style calculation times, layouts, etc. Of course, a natural question for me was whether he had done such an analysis for OpenUI5 as well and found any interesting trends. When I approached him after his session, he told me that he had already done that, but unfortunately had no time to give any details as he had to leave early. So I hope to find some time soon and try it out myself.

The “entertainment factor” of this talk was also high, as Parashuram decorated his slides with cute and funny Stormtroopers 🙂

NextGen Web Protocols: What’s New? (Slides)

Daniel Austin, GRIN Technologies

The topic of HTTP/2 is becoming more and more popular these days, as it promises performance gains of up to 60% by addressing the shortcomings of HTTP/1.1, which had not been updated since 1999. So I took the chance to get an overview of it and attended this session.

Here are some quick facts about HTTP/2 that I learned from Daniel’s talk:

– The main goal of HTTP/2 is to reduce HTTP response times. It improves bandwidth efficiency, not latency!

– HTTP’s semantics remain unchanged to avoid compatibility issues.

– It is based on SPDY, which Google proposed as a wire-format extension to HTTP in 2009.

– It was standardized by the IETF on May 14, 2015 as RFC 7540.

– Though the standard does not require TLS, browsers support HTTP/2 only over TLS, so in practice all HTTP/2-enabled sites will use HTTPS.

– Implementations:

Servers: Akamai Edge servers, F5 BigIP, Apache (mod_h2), Nginx, MS IIS (Windows 10)

Clients: Chrome, Firefox, Safari 9 (+ apps!), CURL, MS Edge (Windows 10)

At a high level HTTP/2 introduces the following changes:

  • A binary rather than textual format, which is cheaper to parse and more compact. But it also means that debugging is trickier, and tools like Wireshark will be needed more often.
  • The number of physical HTTP connections is reduced to just one; instead of multiple connections there are multiplexed streams, carried as interleaved control and data frames. As these frames do not need to arrive sequentially, this solves the issue of head-of-line blocking.
  • The number of bytes and (logical) messages sent is considerably reduced through mechanisms like header compression and server push. Headers are compressed using the HPACK specification, which relies on two main methods: (1) differential encoding, where the first request carries the full header information and subsequent requests send only the difference from it; (2) Huffman coding, to further compress the binary data. Server push enables the server to “push” multiple responses to the client’s first request, suggesting what other resources it might need. This avoids unnecessary round trips and the server waiting until the client has parsed the response and discovered further dependencies.
  • Besides this, HTTP/2 also prioritizes both messages and packets for queuing efficiency and improves caching.
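The differential-encoding idea is easy to see in a toy form (my own illustration; real HPACK additionally uses static/dynamic indexing tables and Huffman coding, and encodes to binary):

```javascript
// Toy differential encoding: send only headers that changed since the
// previous request on the same connection.
function diffHeaders(previous, current) {
  const delta = {};
  for (const name of Object.keys(current)) {
    if (previous[name] !== current[name]) delta[name] = current[name];
  }
  return delta;
}

const first = { ':authority': 'example.com', 'user-agent': 'demo-browser', cookie: 'id=1' };
const second = { ':authority': 'example.com', 'user-agent': 'demo-browser', cookie: 'id=2' };

// On the second request only the changed cookie needs to go on the wire;
// the repeated authority and user-agent headers cost nothing.
console.log(diffHeaders(first, second)); // { cookie: 'id=2' }
```

Since headers like cookies and user-agent strings repeat on nearly every request, this alone removes a large share of the per-request overhead.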

Daniel also talked briefly about some other recently developed protocols like QUIC, which uses UDP instead of TCP, and Scratch, which was proposed by Daniel himself. These protocols are still in the experimentation phase.

WebSocket Perspectives 2015 (Slides)

Frank Greco and Peter Moskovits, Kaazing Corporation

This talk provided some interesting insights into WebSockets in the context of IoT/WoT, cloud connectivity and microservice transports.

[Image: human web vs IoT data flow]

The terms “Web of Things” and “Internet of Things” are sometimes used interchangeably, but making a distinction is actually important. Frank defines IoT as “embedded computing endowed with Internet connectivity” and WoT as “an application and services layer over IoT”, analogous to the Internet (network layer) vs the Web (application layer). IoT relates more to the connectivity aspects, which are not sufficient without formal APIs, protocol standards and common frameworks. An interesting observation Frank presented was that the data flow model for the human web and for the WoT is quite different, and hence we need to rethink which protocols and architectures we use for the new model.

They mentioned microservices as another context. In scenarios with hundreds of microservices communicating through REST-based calls, a lot of latency accumulates across the overall architecture, as each caller has to wait for replies. Switching to an asynchronous approach can be better in this sense, as it also increases scalability.

Furthermore, using WebSockets can also be advantageous in the context of hybrid cloud connectivity, where cloud services require frequent, on-demand, real-time access to on-premises systems.

In such an event-driven world, the question arises whether HTTP is the right choice as the web communication protocol, since it has many disadvantages in the scenarios mentioned above, such as inefficient consumption of resources and bandwidth, and real-time behavior simulated through workarounds like resource-intensive polling or AJAX/Comet.

The WebSocket protocol addresses many of these limitations by providing a full-duplex persistent connection. However, it is important to understand that WebSocket is a peer protocol to HTTP, and the two can be used in combination to take advantage of caching, CDNs and other benefits of HTTP.
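The full-duplex model is visible in the client API itself: once the connection is open, either side can send at any time. A minimal sketch (the URL is illustrative; the constructor is passed in so the same code works with the browser's built-in WebSocket or a Node.js implementation such as the `ws` package):

```javascript
// Open a WebSocket and wire up both directions of the duplex channel.
function connect(WebSocketImpl, url, onMessage) {
  const socket = new WebSocketImpl(url);
  socket.onopen = () => socket.send('hello');            // client -> server
  socket.onmessage = (event) => onMessage(event.data);   // server -> client push
  return socket;
}

// In a browser: connect(WebSocket, 'wss://example.com/feed', console.log);
```

Contrast this with HTTP polling, where every server-to-client message costs a full request/response round trip.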

The protocol has been standardized by the IETF as RFC 6455, and its JavaScript API is currently being standardized by the W3C. All modern browsers already support WebSockets, and there are many server-side implementations, both commercial and open source.

Falcor: One Model Everywhere (Slides)

Jafar Husain, Netflix and TC39 (JavaScript Standards Committee)

Falcor is a new JavaScript library open-sourced by Netflix that provides a data access mechanism with the following benefits:

  • an optimized way of requesting as much or as little data as we want in a single request
  • an asynchronous mechanism for fetching data, so the UI can be populated as soon as the data arrives
  • the flexibility to treat the data as a single unified JSON model even though its segments are retrieved from multiple data sources.

As with most web applications, the domain model of Netflix is a graph, and it is not possible to represent a graph as a JSON object (which is a tree) without duplicates. To avoid this problem, Falcor introduces the JSON Graph convention, which basically does the following: “Instead of inserting an entity into the same message multiple times, each entity with a unique identifier is inserted into a single, globally unique location in the JSON Graph object.” Another utility used by Falcor is the server-side Router. When a portion of the JSON model is requested, it is matched against a route, and routes are defined not through URLs but through paths in the JSON document. This creates the illusion of a single model served from multiple resources, because each route can delegate its data requests to a different data source.
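A toy example of the convention (the `$type: 'ref'` shape follows Falcor's JSON Graph documentation, but the data and the resolver below are my own illustration, not the Falcor library):

```javascript
// Each entity lives in exactly one place; other locations reference it.
const jsonGraph = {
  titlesById: {
    1: { name: 'House of Cards' }
  },
  myList: {
    0: { $type: 'ref', value: ['titlesById', 1] },
    1: { $type: 'ref', value: ['titlesById', 1] } // same title, no duplication
  }
};

// Walk a path through the graph, transparently following references.
function getPath(graph, path) {
  let node = graph;
  for (const key of path) {
    node = node[key];
    if (node && node.$type === 'ref') node = getPath(graph, node.value);
  }
  return node;
}

console.log(getPath(jsonGraph, ['myList', 0, 'name'])); // 'House of Cards'
```

Because both list entries resolve to the same location, updating `titlesById[1]` updates every view of that title at once, which is exactly the consistency problem duplicated tree data creates.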

Falcor does not have a powerful dynamic query mechanism, and compared to JSONPath it is rather limited, but it enables optimizing the queries that are expected and happen most often.

As part of HTML5DevConf, Jafar also offered a training called “Async Programming in JS”. In this talk he summarizes the main points of the topic, which is actually quite interesting and which I would recommend checking out. He also offers the same course on FrontendMasters and egghead.io, and has put the exercises he used during the training online.


Drunk User Testing (Slides)

Austin Knight, HubSpot


[Image: the user is drunk]

This was a fun talk about an unconventional user testing strategy based on the “The User is Drunk” paradigm. The underlying principle is: “Your site should be so simple and well-designed that a drunk person could use it.” Some people take this concept so seriously that they even make money by conducting such tests (UX expert Richard Littauer runs http://theuserisdrunk.com/ and has also set up http://theuserismymom.com/). You can find the fundamental concepts of this methodology in Austin’s blog.

He also suggests giving high importance to creating a UX culture within a company. According to him, this can be achieved by following the principles below:

  • Everyone is a UX Designer
  • Involve your designers and developers
  • Fall in love with problems, not solutions
  • Listen to sales and support calls
  • Get your hands dirty

UX Super Powers with #ProjectComet (slides)

Demian Borba, Adobe

[Image: design thinking process]


Unfortunately, I missed this session because I was attending a parallel one, but I found the topic interesting and learned more about it through the slides posted online.

Demian Borba is a Product Manager at Adobe working on Project Comet, a UX design and prototyping tool set to arrive this year. Judging from the slides, he did not talk merely about the tool, but also about the underlying UX concepts and the Design Thinking methodology developed at the Hasso Plattner Institute of Design at Stanford (a.k.a. the “d.school”).

The iterative process of this methodology is well summarized in this image, which I found here. Although in my current role I don’t work directly on the UI and don’t make design decisions, I believe embracing this mindset is crucial for me and for anyone working in development, as we all eventually have an indirect impact on the end user’s experience.

In his presentation, Demian also gave some book recommendations that look very promising and that I hope to read soon.

“Creative Confidence” by IDEO founder and d.school creator David Kelley and his brother Tom Kelley

“Mindset: The New Psychology of Success” by Carol Dweck (fixed vs growth mindset => praising abilities vs effort)

“The Ten Faces of Innovation” by Tom Kelley

Building Web Sites that Work Everywhere (slides)

Doris Chen, Microsoft


Doris talked about the fundamentals of cross-browser website development and presented testing tools that check whether a site displays successfully across different browsers, devices and resolutions. The list included:

  • Site Scan – Reports back on common coding problems
  • Browser screenshots – Take screenshots of your site in a selection of common browsers and devices
  • Windows virtual machines – free downloads of Windows virtual machines used to test IE6 – IE11
  • BrowserStack – A paid online service that gives you access to hundreds of virtual machines

One of the main messages of her presentation was that feature detection should always be preferred over browser detection (navigator.userAgent), as it is more reliable. Microsoft generally recommends using Modernizr for this task, as it detects all major HTML5 and CSS features. She also talked about polyfills as a means of filling in standard APIs on older browsers, so that application code does not have to be rewritten.
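The core idea of feature detection fits in a few lines. A sketch (my own illustration of the pattern, not code from the talk; the browser examples in the comments are the classic use cases):

```javascript
// Feature detection: test for the capability itself rather than guessing
// from the user-agent string, which browsers routinely spoof anyway.
function supports(object, feature) {
  return object != null && feature in object;
}

// Typical browser usage:
// if (supports(navigator, 'geolocation')) { navigator.geolocation.getCurrentPosition(showMap); }
// if (!supports(window, 'Promise')) { /* load a Promise polyfill */ }

console.log(supports({ geolocation: {} }, 'geolocation')); // true
console.log(supports({}, 'geolocation'));                  // false
```

Libraries like Modernizr package hundreds of such checks, including the trickier CSS ones that cannot be tested with a simple `in` check.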

Prototyping the Internet of Things with Firebase (Slides)

Jennifer Tong, Google

Jenny did a great job showing how easy it can be to build a simple IoT project using JavaScript. In her demo she used Node.js, Firebase, the Johnny-Five library, and boards like the Raspberry Pi.

Firebase is a Google-acquired company providing the following cloud services: a realtime database, hosting and authentication. The realtime database service allows application data to be synchronized across clients and stored in Firebase’s cloud. Its REST API uses the Server-Sent Events protocol, an API for creating HTTP connections over which a server can push notifications to the client. In contrast to the WebSocket protocol, with SSE the client cannot push messages back over the same channel, but its advantage is that it uses plain HTTP connections and no additional setup is needed. See Firebase in action in this real-time map of San Francisco bus locations.
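What makes SSE appealing is that the wire format is just text over a long-lived HTTP response: `data:` lines, with a blank line terminating each event. In a browser you would simply use `new EventSource(url)`; the toy parser below (my own illustration, with invented sample data) shows what the protocol looks like on the wire:

```javascript
// Parse a chunk of Server-Sent Events text into event objects.
function parseSSE(chunk) {
  return chunk
    .split('\n\n') // a blank line separates events
    .filter((block) => block.trim() !== '')
    .map((block) => {
      const event = { event: 'message', data: '' }; // 'message' is the default type
      for (const line of block.split('\n')) {
        if (line.startsWith('data:')) event.data += line.slice(5).trim();
        else if (line.startsWith('event:')) event.event = line.slice(6).trim();
      }
      return event;
    });
}

const events = parseSSE('event: location\ndata: {"bus":42}\n\ndata: ping\n\n');
console.log(events[0].event, events[0].data); // location {"bus":42}
```

Because this is ordinary HTTP, it passes through proxies and firewalls that sometimes interfere with WebSocket upgrades.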
