Google Analytics for the Physical World

How do you explain what reelyActive does? Early in our history, we discovered that describing our platform as “Google Analytics for the Physical World” resonated well with those who were familiar with Google’s tool, typically used for understanding and optimising website traffic.

Exactly two years ago, motivated and inspired by our success at the Startup World Finals, we asked ourselves: why don’t we literally create Google Analytics for the physical world? In other words, why not directly connect our real-time location platform with the tool?

Exactly one year ago, once our open source software stack was sufficiently advanced, we finally did it. Turns out it only required a few trivial lines of code!
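For the technically curious, those lines amount to little more than forwarding each detection event to Google’s Measurement Protocol. Here’s a minimal sketch in Node.js, not our production code: the tracking ID is a placeholder, and treating each detection as a pageview of a place-named “page” is simply one way to wire it up.

    // Minimal sketch: forward a device detection to Google Analytics
    // via the Measurement Protocol. The tracking ID is a placeholder.
    var http = require('http');
    var querystring = require('querystring');

    function trackDetection(deviceId, placeName) {
      var hit = querystring.stringify({
        v: 1,                    // Measurement Protocol version
        tid: 'UA-XXXXXXXX-X',    // your Analytics Account ID (placeholder)
        cid: deviceId,           // anonymous client ID: the unique device identifier
        t: 'pageview',           // treat each detection as a pageview...
        dp: '/' + placeName      // ...of a "page" named after the place
      });
      http.get('http://www.google-analytics.com/collect?' + hit);
    }

    // e.g. called whenever our sensors decode a Bluetooth Smart packet:
    trackDetection('001bc50940810000', 'cafe-pos');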

Now that we’ve collected nearly a year’s worth of data from Notman House and pilot projects at two local cafés, let’s see how Google Analytics for the Physical World is working out.

A real-time view in the tool’s dashboard shows the current number of visitors to a website. In our case, it shows how many unique devices (smartphones, wearables, etc.) are emitting Bluetooth Smart packets detected by our sensors. The image below shows 11 detected visitors across both cafés at 7h30 in the morning.

Mon cafe bistro live

But it’s in the session histories that we start to identify interesting events that could yield actionable intelligence. Take, for instance, a week of “session” data from the cafés. We clearly see Sunday and Saturday, on the bookends, with unstructured attendance. From Monday to Friday, however, we can easily identify the breakfast, lunch and dinner peaks.

Weekly sessions at Mon cafe bistro

Now let’s look at the following week. What happened on Monday? Oh right, that was Canadian Thanksgiving (which we celebrate on a Monday, a month earlier than our American friends, eh?). In both cases we see that Friday lunch appears to be peak business, and that can be compared with the Point-of-Sale records and anecdotal evidence from the staff.

Weekly sessions at Mon cafe bistro

How about a different venue such as the startup accelerator at Notman House where we proposed on our first day, three years ago, that walking into a space would be like logging into that space? Well, here’s last week’s session data. What’s that spike on Thursday October 29th at 17h? It’s Startup Open House which attracted plenty of visitors.

Notman House analytics during Startup Open House

Finally, a visualisation to show visitor flow, or, in the physical world, customer journey mapping. Each cafe has two sensors, one near the POS, the other near the dining area. Here we see which sensor is more likely to detect the visitor’s device first, and then the progression between the two sensors. The red drop-offs from the starting “page” are equivalent to a bounce, or a passing visitor. Progressions in the customer journey are representative of conversions: the visitor entered the café for some time.

Visitor flow at Mon cafe bistro

An astute observer will notice that there are never any journeys from one café location to the other. This is because the session would expire before the visitor had time to make it from one to the other. Nonetheless, tools such as Web Road 66, created by our partners in the café deployments, are able to identify and analyse such cross-location visits which represent significant value for chains and franchises.

Google Analytics for the Physical World is indeed a hit (pun intended), even with its limitations. For an SME that already uses the tool for their website, it’s a simple progression to use it for physical presence via seamless integration with the reelyActive platform. All we need is your Analytics Account ID to start streaming the real-time data from our servers, and, chances are, the traffic will fit within the limits of the tool’s free tier!

The next time you enter a space, look down and imagine the floor as a giant webpage and your feet as the cursor. Fear not, you’re browsing incognito! Soon though, we trust you’ll find a comfortable incentive to opt-in: our mission, after all, is to unlock the value of the data you choose to share.

A simple mantra for the IoT

In his classic TED talk from 2010, distinguished chemist George Whitesides contrasts simplicity and complexity, specifically in relation to science and technology. He describes the Internet as a series of simple, stacked elements from binary ones-and-zeros to the Google search bar. He concludes with two famous quotes, and then closes on a gem of his own.

First, Albert Einstein:

“Everything should be made as simple as possible, but not simpler.”

Second, Antoine de Saint-Exupéry:

“You know you’ve achieved perfection in design, not when you have nothing more to add, but when you have nothing more to take away.”

Third, George Whitesides himself:

“How do we make things as simple as we can, as cheap as we can, as functional as we can and as freely interconnectable as we can? If we make that kind of simplicity in our technology and then give it to you guys, you can go off and do all kinds of fabulous things with it.”

Is that not a beautiful mantra for the engineers, architects and designers of the Internet of Things? Simple, cheap, functional and freely interconnectable. Let’s get stacking!

Blogging about Tweeting about Blogging

Just now we wrapped up an hour-long #StartupChats session on Twitter discussing the importance of blogging for one’s startup (summary via Storify). And now that we’re writing about that here, we are effectively Blogging about Tweeting about Blogging, as the title implies!

Why? Because that hour of human-generated 140-character content flying across the Internet made us think of the future, the Internet of Things, and what the machine-generated content might look like if technology were truly contextually aware and cognisant of the physical world.

A few weeks ago, we presented at the Ambient Intelligence conference in Waterloo and closed with the following thoughts:

  • What if (non-human) things were in complete control of what they shared and with what?
  • What if the information they shared increased their access to utility?

For instance, taking the automotive example often cited at the conference, imagine self-driving cars negotiating with parking spaces and one another. Now imagine we could follow that as a Twitter conversation just like #StartupChats, but rather #ParkingChats. Of course such negotiation between inanimate objects (or, more accurately, their “avatars”) is totally foreseeable; however, it is likely to employ a more machine-friendly communication mechanism. But perhaps in the future Internet of Things, an intrepid developer will create a human-friendly translator for such machine communication that may in fact prove to be rather entertaining:

SelfDriving99:   @tightparkingspace are you free?

tightparkingspace:   @SelfDriving99 why yes, I am.

SelfDriving99:   @tightparkingspace on my way baby!

IDriveMyself_123:   @SelfDriving99 perhaps you should ask @tightparkingspace again. #winning

SelfDriving99:   @tightparkingspace WTF?

tightparkingspace:   @SelfDriving99 welcome to the #SharingEconomy dear! cc @IDriveMyself_123

IDriveMyself_123:   @tightparkingspace @SelfDriving99 LOL

Clearly, we are not the first to think about such humour.

Given that this post started out on the subject of Blogging about Tweeting about Blogging, the deeper (silly) question is whether the above conversation might subsequently be discussed at a higher level via another medium. In other words, might those self-driving cars and parking spaces “blog” about their tweeting-about-parking experience?

Curiously enough, our answer today would be “perhaps”. If the goal of ubiquitous computing and the Internet of Things is indeed to maximise efficiency, one could argue that the algorithms behind efficiency calculations would indeed be dynamic and subject to improvement themselves. Hence the idea of machines blogging about tweeting about parking, with the goal to increase parking efficiency, may not be completely far-fetched. And should it come to that, one can only hope that we as humans find humour, inspiration and, dare we say, a deeper appreciation of ourselves, in those translated discussions.

Accelerator Day One: how wrong were we?

Two weeks ago, we celebrated threelyActive, the three-year anniversary of reelyActive’s incorporation. At that time, the motivation for our incorporation was our acceptance into FounderFuel, one of Canada’s premier startup accelerators. Today is the third anniversary of FounderFuel Day One. On August 13th, 2012 we met the seven other startups of our cohort at Notman House (pre-renovations) and in the afternoon gave our first formal five-minute pitch. Today, to celebrate, let’s revisit that pitch deck, have a good laugh, and see just how wrong we were about our vision, our business and the future.

Proximity Identification Platform?

While we clearly recall the immense pride and satisfaction of wrapping up a day-long discussion among co-founders, having arrived at the title “Proximity Identification Platform”, I don’t think too many people, ourselves included, would argue today that we should have stuck with it. Indeed, today we’re about “Smart Spaces”, which, among other things, supports a far more elegant shortened domain name: smartspac.es.

We were definitely wrong on the branding: people get “Smart Spaces”. Many will go so far as to add the term to their vocabulary. No such luck with “Proximity Identification Platform”. Nonetheless, we weren’t all that wrong on the concept: Smart Spaces are simply a platform for identifying people, places and things at a human level of proximity.

The Creation of Adam?

When this is your bold intro slide, it’s quite obvious that what follows will be a vision pitch. You are indeed correct if you identified this as Michelangelo’s Creation of Adam drawn as stick figures on a whiteboard, photographed and imported into the presentation. We stopped using this slide long ago when we focused on what we are doing and why. The drawing, nonetheless, was the best way we could, in 2012, represent the why without words.

We were definitely wrong on the presentation: you’d never stand up in front of hundreds of people on demo day with this painfully monochromatic hack of a slide projected behind you. However, our experience has repeatedly shown that the image is both polarising and attention-grabbing. We touched something, be it privacy concerns, self-perception (a mix of physical and digital) or simply a critique of the artwork. As time goes on, our role as humans amidst the Internet of Things becomes increasingly uncertain, making those concerns increasingly relevant and preoccupying. Why shouldn’t the fully-connected world have its own Sistine Chapel ceiling for each of us to observe, contemplate and question?

Log in to a space?

This is the slide that most people remembered from our pitch. Even months later, other members of our accelerator cohort would bring up how this slide in particular struck them and sparked the thought “wait, what if walking into work and logging in to work were the same”?

We were decidedly un-wrong on the theme (yay!), but indeed wrong, again, on the branding: a year later we launched Log in to Life which made the concept a reality, but failed to stir the same excitement. In retrospect, the term “log in” should be utterly avoided in the branding of any experience. Today, instead, we talk about Smart Space Experiences and shot a video to bring the vision to life. Are we wrong again? Almost certainly. How wrong? Both time and our audiences will soon tell.

Bluetooth Smart

Look, a slide with some colour! Here we asked our audience to take a leap of faith with us. In 2012, we achieved “proximity identification” with our proprietary keyfob tags (Active RFID). But we knew that Bluetooth Smart (aka BLE) would slowly make its way into smartphones and other gadgets that people were likely to carry. In other words, we had to convince our supporters that the totally painful, non-scalable proposition of asking people to carry our keyfobs would soon evolve into our platform identifying the occupants of a space “for free”.

We were definitely wrong on the timeline. 2013? Ha! Apple wouldn’t unveil iBeacon until the Fall of that year (which we totally capitalised on). Nonetheless, we were very un-wrong in our prediction: there are currently 3 billion Bluetooth devices shipping annually, and in many cases our Smart Spaces are actually doubly-detecting their occupants via their smartphone AND their wearable. That’s the kind of problem we could only have dreamed of having three years ago!

The Summary

You learn quite quickly that when investors ask you “who is your target market”, your answer should not be “everyone”. However, that’s effectively what we went with in this slide. So how wrong were we there?

Well, over the last three years, we have indeed deployed our platform in businesses, homes and vehicles. Our best sales channels are through systems integrators who co-innovate with us and develop on our platform often as both clients and partners. And every human carrying a Bluetooth Smart device is typically already a participant, and occasionally a user (through opt-in). Moreover, the groundwork is laid for those users to revenue-share with us, so perhaps we were wrong about labelling them as customers. In fact, we’d be very pleased if that was the only part that we really got wrong.

How about the multi-billion dollar market? Boy were we wrong about that! Just after our demo day, Cisco pivoted to become “The Internet of Everything” and suddenly both they and GE were floating around market opportunity values in the tens of trillions of dollars! Were their predictions wrong too? Surely! They’ve actually raised those predictions since!

Yes, looking back, our first formal pitch on FounderFuel day one was indeed a vision pitch. We were certainly wrong about a lot of things, as is to be expected. But were we wrong to dream about changing the human world? Were we wrong to accord so much emphasis to our vision? The curious thing about our relentless evangelisation is that it has actually fostered many deep, meaningful human relations with our partners, clients, supporters and audience. In other words, we at least made a small impact on our human world by sharing our own dreams about that world. And with that, we have the courage to press on, continuing to knowingly be wrong, hopefully again underestimating the size of the societal transformation that is upon us.

Click here to see the full presentation

threelyActive

Three year anniversary of reelyActive

Today, August 1st, 2015, marks, from an official perspective, the three-year anniversary of the moral entity known colloquially as reelyActive, and more formally as 9266-5777 Québec inc. Although today we may enter our fourth fiscal year, in reality we embarked on full-time development of the project in the first days of 2012. And the name reelyActive? It was registered as a dot-com as far back as April 16th, 2011 when I (Jeffrey Dungen, co-founder and CEO) sparked on the idea of Active RFID infrastructure in a reel configuration, connected the terms with a ‘y’, and took advantage of the availability of the domain name.

We tell the right story to the right audience at the right time.

The above slogan is one that we began using in the Fall of 2014 (in yet another attempt) to explain what reelyActive does. I’m proud of that slogan because I feel it extends elegantly to all types of societies, not just those of low-power wireless devices, or humans. But before we digress, let’s put that slogan to the test, because chances are that if you, the audience, have read this far, you’re genuinely interested in our story on this, our anniversary.

In the summer of 2010, in an unmemorable food court café, I sat down opposite a very striking man with the most ridiculous beard. It was a curious moment, because I too had a most ridiculous moustache. There we were, two people at a turning point in our own lives, endowed with intentionally-grown fur on our faces, meeting for the first time. It felt as though our whiskers tingled simultaneously and, without words, we immediately knew our paths had crossed for a reason. And when those words came, they brought confirmation. “Imagine there were transparency across the world,” proposed the bearded man, “because you could see anywhere in real-time.” That man was Drew Sechrist, the founder of Koozoo (née Kuuzuu), which proposed a crowdsourced network of cameras pointed at public spaces all over the globe to foster transparency and real-time understanding through computer vision.

None shall shave!

The facial hair, shown above, lasted until Christmas of 2010 when, upon seeing a photo of my bare face, Drew had his beard professionally removed, knowing himself victorious in the unofficial man-test.

Drew and Koozoo made me believe that it was indeed plausible, if not possible, to actually achieve global transparency and understanding through technology. And for that I owe him a great deal. He chose video, because it was tangible. With my technology background, I would have chosen radio. And for that part of the story we’ll need to go back several more years…

A few days after the 2004 New Year, another bearded man sat down to speak with me. Only this time it was in a lab at École Polytechnique Montréal, the man was my research director, Jean-Jules Brault, and his beard truly suited him well. “Jeff” he said to me (in French), “there’s this physicist who lost his luggage and wants to build a wireless system for precisely locating and tracking luggage. He’s looking for an embedded developer”. Cool. The only problem was that I didn’t believe in the technical feasibility of the proposed means of Time-Difference Of Arrival (TDOA) measurement core to the project. “Don’t worry” he reassured me, “there’s government funding specifically for such experimental development”. Cool. That would be my initiation to the incredible fiscal advantages of Scientific Research and Experimental Development (SR&ED) in Canada, especially in Québec.

Coding with the baggage

And that was the start of my journey with Purelink Technology inc., which would last until 2010 and would take me to exotic places such as the baggage sorting facility at YUL, shown above, where we would sit on plastic bins, coding and debugging our Real-Time Location System (RTLS) using concrete barriers as makeshift tables. Smiling in that photo is a young Pier-Olivier Genest, who would later become not only a reelyActive co-founder, but also the father of an epic beard (yes, there’s a theme here). The other co-founder of reelyActive, Traian Antonescu, also dates from the days of École Polytechnique Montréal and Purelink Technology inc., where he would consistently show his resourcefulness, not only in the design and implementation of radio circuits, but also in the design of startup sleeping accommodations, as shown below.

Startup Sleeping Accommodations

Purelink made us believe that it was indeed plausible, if not possible, to identify and locate everyday objects using low-power wireless technology. And while we didn’t always agree with Emerson Nerat, the founder, on strategy, I commend him fully on his tenacity, which can only be described as in a class of its own, and which has inspired me not to give up on reelyActive through the many hardships of the last three years, and surely those to come.

Which brings us back to today, the third anniversary of reelyActive. Having survived, for lack of a better term, these past three years, we can look ahead with plenty of optimism that there will indeed emerge a global network for the identification and location of low-power wireless devices embedded in everyday objects. Such a network will enable a real-time understanding of the world, laying the foundation for a massive gain in efficiency that will benefit not only humanity, but also everything with which we share this planet. Yes, that sounds bold, but at least now we’re backed up by heavyweights such as Jeremy Rifkin making similar arguments, and by the 3 billion Bluetooth Smart devices shipping annually in smartphones, wearables and everyday objects that can already be identified and located by our distributed sensor infrastructure. Plausible has been replaced with possible, and we’re well on the way towards probable.

We unlock the value of the data you choose to share.

The above is another slogan we began using in the Fall of 2014 (in yet another attempt) to explain why reelyActive matters. I’m perhaps even more proud of that slogan because, to me, it encapsulates how the Internet of Things empowers the individual within a society, be it human or otherwise. Sure, you’re likely left scratching your head at how that actually happens. But if we put the technology aside for a moment, imagine your next trip across town where you hop on and off sidewalks, buses, private cars, taxis, bicycles, (hoverboards?) arriving as efficiently as possible. All it takes is real-time understanding of space and the vehicles that occupy it, including their state, which their operators can simply choose to share.

This Fall, a colleague will be moving his family from Europe to join us in Montréal. When we first spoke, he highlighted that we were the only IoT startup he found, after extensive research, with a “user-centric value proposition”, referring to the aforementioned slogan. Another colleague is moving his family from South America to join us in “working toward a more efficient world”. Our team is growing with world-class talent rallied around a common vision. And that is, by far, what makes me the most optimistic about what can be accomplished over the next twenty years, a period of time following which we will again be able to mash up a number with our name: twenty-threelyActive. A man can grow a hell of a beard in that time!

Smart Spaces in 137 Seconds?

Just over a year ago we published a blog post entitled “Beyond the Beacon: BLE Just Got Reel”, where we showed how our technology could detect and identify iOS7 devices using Bluetooth Smart technology. The video from that post has been viewed over ten thousand times, is featured in this GigaOM article and helped us land an opportunity to present at Bluetooth World in April 2014.

The above video crams into 137 seconds all of the progress we’ve made since then. And this blog post will help direct the curious viewer to all of the little bits which together comprise what’s shown in the video.

What are those black and white hardware devices?

We call them reelceivers, we designed them from the ground up, and what they do is listen for wireless advertising packets from nearby devices. In other words, they detect and identify things like beacons, wearables, smartphones and our active RFID tags. Here’s how to find out more:

What’s the open-source software running on the PC?

It would have taken way too much time to show it all in the video, so we’ll direct you to everything here:

  • barnowl is our middleware package which interfaces with the hardware
  • hlc-server is a contextual API built around barnowl
  • smartspaces is the webpage you see in the video, including the server-side part behind the scenes
  • Make a Smart Space is the tutorial on diyActive that brings it all together, and is your best starting point (and yes, it features the video too!)
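To give a flavour of just how little code is involved, here’s a minimal barnowl sketch; the serial path is a placeholder for wherever your reel of reelceivers is connected, and the event handling follows the visibilityEvent API from the barnowl README.

    // Minimal barnowl sketch: listen for devices decoded by the reelceivers
    var barnowl = require('barnowl');
    var middleware = new barnowl();

    // Placeholder path: adjust to wherever your reel is connected
    middleware.bind({ protocol: 'serial', path: '/dev/ttyUSB0' });

    // A tiraid combines the transmitter identifier, radio decodings
    // and timestamp of each decoded wireless advertising packet
    middleware.on('visibilityEvent', function(tiraid) {
      console.log(tiraid.identifier.value + ' decoded by ' +
                  tiraid.radioDecodings[0].identifier.value);
    });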

What’s the “one API call” mentioned in the video?

You just ask hlc-server what is /at/place (where place is a human-friendly location name) and it will return to you:

  • a list of all the devices that are present
  • a processed version of whatever they sent in their last wireless transmission(s)
  • a link to any data associated with that device (for example: this is Jeff’s JSON)

That last one is what makes the technology so powerful. Anyone can associate digital information with their wireless device: hlc-server just makes the link between the unique device identifier (for instance the MAC address) and a URL which lists all the data in JSON format. Have a look at the Hyperlocal Context page of our website and if you’re still keen to know more, read our scientific article: Hyperlocal Context to Facilitate an Internet of Things Understanding of the World.
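In code, that one API call is nothing more than an HTTP GET which returns JSON. A minimal sketch, where the hostname, port and place name are placeholders for your own hlc-server instance:

    // Minimal sketch: query hlc-server for everything /at/ a named place.
    // Hostname, port and place name are placeholders.
    var http = require('http');

    http.get('http://localhost:3001/at/theoffice', function(res) {
      var body = '';
      res.on('data', function(chunk) { body += chunk; });
      res.on('end', function() {
        var context = JSON.parse(body);
        console.log(context); // the devices present, their latest
                              // transmissions and links to associated JSON
      });
    });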

What are all the devices shown on the screen?

In no particular order:

Yeah, Smart Spaces are detecting more and more of the billions of Bluetooth Smart devices shipping every year! hlc-server can determine the device type of most of these based on either the UUID it transmits or the companyCode. If you’d like your device to be recognizable too, please contact us and we’ll include it in the next build.
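To illustrate just how simple such a lookup can be, here’s a sketch; aside from Apple’s well-known company code (0x004c), the table entries are illustrative placeholders rather than a faithful copy of the hlc-server build:

    // Illustrative sketch: look up a device type by Bluetooth companyCode.
    // Entries other than Apple's are placeholders, not the actual table.
    var COMPANY_CODES = {
      '004c': 'Apple product',
      '0075': 'Samsung product', // placeholder entry
      '00e0': 'Google product'   // placeholder entry
    };

    function lookupDeviceType(companyCode) {
      return COMPANY_CODES[companyCode] || 'Unknown device';
    }

    console.log(lookupDeviceType('004c')); // "Apple product"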

Is there some way I can see that website live?

Yes, you can! Check out smartspac.es/reelyactive for live hyperlocal context from our office (which is slowly becoming a museum of trophies and prototypes as well…). Or smartspac.es/notman for live hyperlocal context from Notman House in Montreal, the place where we first experimented with Log in to Life, the precursor of Smart Spaces.

How did you make the Nexus 5 work with the technology?

Okay, this answer will be a bit technical; bear with us:

  1. the Bluetooth Smart reelceivers regularly send ADV_DISCOVER_IND packets
  2. smartphones (both Apple and Android) are curious and send SCAN_REQ packets in response to learn more about the Bluetooth Smart reelceivers
  3. those SCAN_REQ packets include the 48-bit advertiser address of the smartphone
  4. the Nexus 5 on Android 4.4.4 uses a public advertiser address (in other words it doesn’t change)
  5. therefore, whenever the Nexus 5 scans for nearby Bluetooth Smart devices, and a Bluetooth Smart reelceiver is around, the reelceiver uniquely identifies the Nexus 5

So, in the web interface of hlc-server we simply associate the advertiser address of the Nexus 5 with a URL containing JSON data and it works! Note that iOS devices can also be identified in this manner, but they change their advertiser addresses every 15 minutes or so which makes this technique pointless (but there are alternatives!).
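For those who’d rather see it in code, the association boils down to matching a static 48-bit advertiser address. A sketch using barnowl, where the address and URL are made-up examples:

    // Sketch: recognise a Nexus 5 by its public (static) advertiser address.
    // The address and profile URL below are made-up examples.
    var barnowl = require('barnowl');
    var middleware = new barnowl();
    middleware.bind({ protocol: 'serial', path: '/dev/ttyUSB0' });

    var NEXUS5_ADDRESS = 'd05fb8123456';
    var PROFILE_URL = 'http://example.com/jeff.json';

    middleware.on('visibilityEvent', function(tiraid) {
      if (tiraid.identifier.value === NEXUS5_ADDRESS) {
        console.log('Nexus 5 detected! Associated data: ' + PROFILE_URL);
      }
    });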

How do you aggregate the tweets of everyone present at a location?

In smartspaces, we use the Twitter handle of every person detected to load their most interesting tweets and then cycle through them in the “Social View” for roughly 24 hours. This works when people opt-in with a compatible device and share their Twitter handle, and it’s reely, reely cool at places like coworking spaces!

Could you really decide what song to play based on the preferences of everyone present?

We’ve been throwing around this idea for two and a half years and nothing would make us happier than for someone to make it a reality using our platform. All of the ingredients are finally there, so please be the one to make it happen (and let us know when you do!).

Why is there so much audio static in the video?

This is best explained as a MasterCard commercial:

Digital SLR: $800. Wide-angle Japanese lens: $750. Directional microphone: $100. Not realizing that the 99 cent AA battery in the mic was almost dead until after everything was filmed: PRICELESS.

Thanks for watching and reading and stay tuned. There’s plenty more in the works!

The Age of Hyperlocal Context

In their book, The Age of Context, Robert Scoble and Shel Israel examine how contextually aware technologies will impact our daily lives. The video above chronicles our CEO, Jeffrey Dungen, meeting with Robert Scoble during his research for the book (the most pertinent part is at 28:36), which provides a comprehensive overview of the preeminent contextual technologies, what they enable today, and how they’re likely to change the future. For us, there are two chapters that especially stand out.

Chapter 8. Why Wearables Matter

In Chapter 8, the authors present a variety of wearable identification technologies including the Aspen Snowmass RF Card, Disney Magic Bands and Nagra ID Smart Cards as well as wearable displays such as Google Glass and Oakley Airwave. They envisage a “contextual, wearable system that knows your location, your current activity, your preferences as well as what you are looking at in real-time” which they follow with the statement:

While such an omnibus system does not exist at this moment, every component of it is currently available, it would not take much to stitch it all together.

What Robert Scoble and Shel Israel are describing is essentially our Log in to Life experience adapted for wearable screens. For instance, at the International Startup Festival, we stitched Hexoskin and reelyActive technologies together to create a location-based activity display including all nearby identifiable people. And since it was consumable as a webpage, it could have been viewed on Google Glass!

Hexoskin, reelyActive and Log in to Life

For us, wearable identification technologies are the cornerstone of contextual awareness. When everyone and everything in proximity can be uniquely identified by their wearables (including smartphones), and each is associated with its corresponding digital footprint and/or personal preferences, you have what we call hyperlocal context: a digital representation of the people and things at a point of interest. Ubiquitous contextual awareness becomes a reality when all concerned parties can consume hyperlocal context via an API. In October, we shared our vision with the scientific community, presenting our paper Hyperlocal Context to Facilitate an Internet of Things Understanding of the World at the IIKI 2013 conference in Beijing, China.

Hyperlocal context is the mechanism which stitches together location, identities and all associated metadata. While the principle is elegantly simple, the challenge lies in dealing with a wide variety of device vendors and identification technologies. To overcome this challenge, we’ve developed a multi-standard, vendor-agnostic hardware infrastructure that can identify and locate wireless devices in order to provide hyperlocal context. Of course this presents the challenge of new infrastructure requirements, as we describe in our previous blog post Rails and Reels: History, Infrastructure and the IoT; however, in our opinion, it is the most effective and efficient means to usher in an age of ubiquitous contextual awareness.

Chapter 12. Why Trust is the New Currency

In the final chapter of the book, Robert Scoble and Shel Israel summarize their thoughts on the incentives and trade-offs of contextual experiences. One might say nothing ventured, nothing gained:

We think the benefits we gain from contextual technologies are worth the cost of the loss of some of our personal information.

We fully agree. They proceed to argue that “people should be allowed to opt-out whenever they find the privacy costs are just too high for their personal tastes.” Of course, the ability to opt out is unquestionable; however, we emphasize that an opt-in strategy is key to gaining public trust and acceptance. Over and over, we’ve seen the public distaste for opt-out WiFi tracking and the knee-jerk reaction to identification and location technologies (see Big Brother and the Identity of Things), and it does not bode well for the future. What contextual technologies need are opt-in experiences so clearly rewarding that people, fully aware of any privacy they are surrendering, not only participate in droves, but further encourage their friends to join in!

Of course, in order to be completely aware of the privacy you are surrendering, there must be a means to understand what you are sharing and with whom at any given moment. Robert Scoble and Shel Israel eloquently state that:

It seems self-evident that we should own our own data and that any third-party should need our permission to use it.

While this may seem completely obvious, think about how your data is collected and shared today. How much time and energy would it take to sift through every Terms of Service agreement and attempt to identify who has access to what information? In our scientific paper, we argue that initiatives such as personal data lockers (for example The Locker Project) are a viable mechanism for managing what you share and with whom. While such initiatives are technically feasible today, the challenge lies in disrupting the current practice of each third-party maintaining their own (often stale) copy of your personal data.

While the gap between the status quo and a world where everyone manages access to their personal data is huge, there is reason to believe that this can, and will, change. Consider the following statement:

We believe the most trustworthy companies will thrive in the Age of Context, and those found to be short on supply of candor will end up short on customers. Transparency and trustworthiness will be the differentiating factors by which customers will make an increasing number of choices.

There is an opportunity today to differentiate and to offer discriminating customers exactly what they’re looking for. For well over a year, we’ve been learning from the feedback on our Log in to Life experience, we’ve speculated on what the most intrepid people might (or might not) share in contextual environments, and we remain excited about the potential for a fully connected world to evolve into a superorganism. Everything we’ve learned has taught us to think, and act, differently. As Robert Scoble and Shel Israel conclude:

Openness and transparency create a significant opportunity for every startup that has giant-killing already etched into its organizational DNA. If we are right, then the Age of Context will give us an open new world.

We could not agree more, and we’re working hard to prove them right!

Rails and Reels: History, Infrastructure and the IoT

Hudson River from Amtrak Adirondack

In November 2013, after an eventful business trip to San Francisco and New York City, I (Jeffrey Dungen, CEO of reelyActive) decided to take the train home to Montreal. Of course a flight or even a bus would have been faster, but I’ve always been fascinated by trains and was excited to have an excuse to take one of the few international passenger rail routes in North America. The Amtrak Adirondack follows a beautiful route along the Hudson River and the banks of Lake Champlain. And the 11+ hour journey gives one plenty of time for reflection while enjoying the scenery.

When you think about it, much of the track on the route dates back as far as the 1850s. And while the rails themselves have been upgraded from the originals, the standard gauge is unchanged. It is almost unfathomable, in a time where consumer electronics become obsolete in years if not months, that a technology infrastructure, such as railroads, could remain relevant for centuries!

It took true pioneers to build the first railroads. Imagine convincing someone who had never seen a train of the value of this new transportation infrastructure. “Railroads would better connect towns, cities and resources” you might say. “But those places are already connected by waterways and primitive roads” they might say. The pioneers, however, weren’t thinking about what railroads were (an accessible and modular means to move freight and passengers over land), they were thinking about what railroads enabled, for instance, the potential to colonize the United States and Canada.

The train ride brought familiarity to that scenario. At reelyActive, we’re proposing infrastructure to better connect people, places and things. “But don’t smartphones already do that?” say many critics. However, just like the railroad pioneers, we are less concerned about what that infrastructure is (an accessible and modular means to move data packets among massive numbers of devices), and rather focused on what that infrastructure enables: the Internet of Things.

And, speaking of the Internet of Things, it has indeed been a very interesting year. The tech blogs hype the concept, and there’s excitement over talk of tens of billions of devices coming online in the next few years. Surely it’s not unlike the buzz preceding the colonization of the American interior in the 19th century. But how will it all unfold in the case of the IoT? Will major players like Cisco, GE and now Bosch become the Vanderbilts, Goulds and Hills?

In keeping with the railroad analogy, if the intent is to create a pervasive Internet of Things, there will indeed need to be a standard gauge just like the one that allows the Adirondack to operate on tracks managed by five different owners in two countries. But that alone is insufficient. The 11 hours it takes the Adirondack to cover the 613 km route is not limited by the locomotive’s speed, but rather by five different track owners all acting in their own interest. A similar arrangement does not bode well for a pervasive Internet of Things!

In summary, the long, slow but scenic trip on the Adirondack was an excellent reminder that infrastructure built on standards can provide continuous and extensive value, far beyond what can be imagined at conception. However, the inconveniences of the journey serve as a reminder that infrastructure must serve the interests of its users, not just its owners. History’s lessons can serve to ensure that the Things of the future enjoy a smooth ride.

Facebook, you might “like” this

Facebook Live Directory

As finalists of Startup World, we were invited to a tour of the Facebook campus in Menlo Park. At the first stop in the tour, we were greeted by a flatscreen displaying a map and directory (see above). Immediately our eyes grew wide: had Facebook developed a live, interactive directory of everyone present in the building?

Hyla Wallis, our tour guide and veteran Facebooker, explained that these flatscreens, located throughout the campus, indicate the familiar “you are here” dot, list where everyone sits and display which meeting rooms are available and booked (green/red). All but the latter are based on static information. This is where our technology could shine.

We’ve been operating a live directory with flatscreens at Notman House in Montreal for well over a year now, having started as an experiment in our FounderFuel cohort. You can check it out right now, it’s simply a responsive webpage. Our cloud knows who/what is present because they carry a little keyfob device that is uniquely identifiable by sensors we installed in the building (see our technology).

How cool would it be to have this installed campus-wide! What amazing features could Facebook build with real-time knowledge of the presence of everyone and everything at points of interest?

  • Live locator: take the existing Facebook flatscreen application and add real-time information about any person you might look up. Are they at their desk, in a meeting, playing ping-pong, not even on campus?
  • Meeting notifications: did the meeting room you’ve booked just free up? Did the first person scheduled for that meeting already arrive? Last-minute room change? Could be a friendly reminder in the ticker.
  • Friend suggestions: did you spend time with someone new in a meeting, at lunch, at a hackathon? We know! How about you become friends on Facebook?
  • Auto login: did you just arrive on campus or at your desk ready for work? Let the system log you in automatically so that your colleagues know. Leaving campus for home? Log out automatically.
  • The physical Like button: do you like one of the restaurants on campus, or one of their menu items? Like what another department is working on? Enjoy the satisfaction of pressing a real, big Like button. Your presence associates this with your account.

Of course, we’d expect the creative minds at Facebook to dream up uses far more awesome than these. But the point is that all of this is possible today: we would merely provide the tools to identify the presence of people throughout the campus at points of interest so that their in-house team could build the rest using our API. Since there are already network cables running everywhere (see photo below) installing our sensor reels would be a plug-n-play breeze. We’d love to create a custom keyfob design in the shape of the iconic “f” for everyone to carry with their badge. And of course, there’s the potential to add BLE beacon functionality to the Facebook app so that anyone can opt-in and participate using the latest smartphones.

A corporate campus is a hotbed of opportunity for our technology. The Facebook corporate campus takes this to a whole new level.

Facebook Installation

iBeacon: a lighthouse in your pocket?

iLighthouse

In our last blog post, Beyond the Beacon: BLE Just Got Reel, we demonstrated an iOS7 device advertising its presence to our hardware infrastructure using Bluetooth Low Energy technology. For many reasons, as we will outline, we were very pleased to validate this functionality, and pleasantly surprised that Apple allowed it. We spoke to Stacey Higginbotham about the possibilities of this beacon role-reversal, which she summarized in her GigaOM article Loophole in iBeacon could let iPhones guard your likes instead of bombard you with coupons.

Over a year in the making

We kicked off reelyActive at the start of 2012 with a vision to create a simple, accessible active RFID platform suitable for almost any application. Our previous experience in the field taught us that almost all of the applications we encountered boiled down to knowing the unique identity and location of the people and things in a space. This can be achieved by having the wireless devices in that space actively identify themselves to fixed infrastructure. From the outset, we’ve been following the adoption of BLE because it enables active RFID for everything from inexpensive, low-power sensors to smart devices and beyond.

In the Fall of 2012 we participated in the FounderFuel startup accelerator, where we were constantly asked “when will smartphones be compatible with your platform?” Due diligence revealed that the problems were manifold. First, even though BLE hardware had been shipping in just about everything since the iPhone 4S and Galaxy S3, the operating systems of these devices provided no facility to leverage the radio for active RFID. Second, the BLE system-on-a-chip we required for our own hardware devices was only available in the form of engineering samples, delaying the development of our prototypes.

While Android 4.3 added BLE support, it did not support the peripheral role which allows the device to initiate the transmission of radio packets. However, iOS7 did support this role and introduced the iBeacon…

Enter the iBeacon

The iBeacon is analogous to a lighthouse: it represents a known location which can be uniquely identified by its signal. When iBeacons are installed in a space, they can be detected by smartphones to assist with indoor navigation or to trigger actions. The smartphone listens for iBeacons much as it listens for WiFi access points and GPS satellites to determine its location (see our blog post: Radio location demystified). Since GPS is poorly received by smartphones indoors, and WiFi was never designed for indoor location, beacons are a nifty way of improving the accuracy of location-aware devices like the smartphone. In the presence of beacons, the smartphone can determine its location with confidence and ease.

The interesting thing about iOS7 is that it allows the smartphone to act as an iBeacon. This means that the device can advertise its unique identity to its surroundings like a lighthouse. Of course, any BLE peripheral can advertise the services it offers, but the iBeacon is special as we will now explain.

Here’s the technical part

An iBeacon packet contains the following information:

  • 48-bit random device address
  • 128-bit UUID based on RFC 4122
  • 16-bit major identifier
  • 16-bit minor identifier

Based on our tests, the 48-bit random device address changes every 15 minutes, and there does not appear to be a means for an application to query this address. The other fields listed above are user defined. This means that an app could specify a 128-bit UUID which would be uniquely associated with the device indefinitely. Specifically, this allows the device to act as a glorified active RFID tag and requires only a few lines of code.
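On the receiving side, those fields are just as easily picked apart. Here’s a sketch that parses the manufacturer-specific data of a captured iBeacon packet from a Node.js Buffer; note that the 48-bit device address lives in the packet header, not in this payload, and that the payload also ends with a calibration byte used for ranging:

    // Sketch: parse the 25-byte manufacturer-specific data of an iBeacon.
    // Expects a Buffer starting with Apple's company ID (0x004c).
    function parseIBeacon(data) {
      if (data.length < 25 || data.readUInt16LE(0) !== 0x004c ||
          data[2] !== 0x02 || data[3] !== 0x15) {
        return null; // not an iBeacon payload
      }
      return {
        uuid: data.slice(4, 20).toString('hex'), // 128-bit UUID (RFC 4122)
        major: data.readUInt16BE(20),            // 16-bit major identifier
        minor: data.readUInt16BE(22),            // 16-bit minor identifier
        txPower: data.readInt8(24)               // calibrated RSSI at 1m
      };
    }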

When run in the foreground, the app can allow the user to opt-in and enable the iBeacon with a tap. Based on our tests, this results in over 30 radio transmissions per second (per channel). However, when iBeacon is run in the background, there are two important changes. First, everything but the 48-bit random device address is dropped from the packet, and, second, the rate drops to about 5 transmissions per second (per channel). Even with the screen locked, we have found that the iBeacon continues to transmit packets indefinitely.

What does this mean?

It’s possible to create an app that allows compatible iOS7 devices to advertise their presence to their surroundings, and to be uniquely identified. The user is in complete control to opt-in or disable this feature in real-time. In the presence of BLE infrastructure like our reelceivers, the space itself can not only become aware of all the devices it contains, but also estimate their location. In other words, contextual awareness can be aggregated by the environment itself. We will present a scientific paper on this topic and the implications at the International Workshop on Identification, Information and Knowledge in the Internet of Things in Beijing in October.

If the app runs in the background, the device can still be identified and located throughout the space, generating potentially useful anonymous data. However it cannot be associated with a UUID (and hence user profile) unless the user brings the app into the foreground. The periodic random device address changes cause discontinuities which may nonetheless be stitched by a clever algorithm under favourable conditions.

How is this better than the standard iBeacon use case?

It isn’t. Both use cases have their merits. What we’ve presented concentrates contextual awareness with the space itself, rather than in the smart device. This favours interactive applications where the environment reacts to the presence and movements of the people and things it contains. It also works seamlessly even when the device loses internet connectivity, and it is battery-friendly. Periodically transmitting packets consumes a fraction of the energy required to listen for iBeacons, WiFi access points and GPS satellites, and offloads the computation of location to the fixed infrastructure.

Thinking ahead to the tens of billions of devices expected to comprise the Internet of Things, a non-negligible fraction of which may use BLE, there’s nonetheless a strong argument for installing infrastructure that provides both ambient network connectivity and beacon capability.

A quick conclusion

In a perfect world, there would be a facility for any BLE device to transmit BLE advertising packets with a UUID or accessible random device address at an appropriate rate (5-30Hz is overkill: for many applications one transmission per second is sufficient). Nonetheless, we’re pleased that Apple has enabled (whether intentionally or not) the active RFID use case we present despite the constraints we’ve identified. With any luck, soon all mobile operating systems will provide the necessary functionality for the world to enjoy interactive, contextually-aware spaces through experiences like Log in to Life.