Ask your digital transformation provider…

Would you choose to receive dental treatment from a dentist with bad teeth? Fitness coaching from a coach who is unfit? Change management consulting from an organisation resistant to change?

At the First International Innovation 4.0 Forum this week, we took in multiple vendor presentations about the digital transformation of entire industries, including the services those vendors offer their clients, which, of course, raised the question:

how far along are those vendors themselves in their own digital transformation?

For instance, the conference was opened by Eric Schaeffer, leader of Accenture’s Digital Industry X.0 program, who argued confidently that “business as usual is not an option anymore”, a statement we agree applies to companies across industries. He went on to argue that transformation is a continuous process rather than simply a discrete step:

“Like the tide, it keeps rolling. You have to live with it. That’s the bottom line.”

It would therefore be interesting to understand to what extent Accenture and their peers have themselves, internally, embraced a culture of continuous change and ongoing transformation. We wonder because we ourselves at reelyActive, despite having a small and highly motivated team who believe we should practise what we preach, nonetheless find this to be a challenging and demanding endeavour.

Earlier this year we asked Are we selling discomfort? The answer is yes: data from our platform indeed highlights that business as usual is no longer an option for our clients—far from a comfortable proposition. Said differently, vendors selling comfort are selling something other than transformation. Those leading the business of transformation are effectively selling discomfort, both externally and internally. And how many companies are actually comfortable with that?

So, for those advancing toward Industry 4.0, or X.0, or [insert buzzword here], our advice is this: ask your digital transformation provider how they themselves realised their own digital transformation. There is, quite literally, much to be learned from their answer.

Who owns the future?

Occasionally a book will provide the opportunity for the reader to travel back in time and delve into the mind of an author who was thinking far ahead, at least as far as the reader’s present. What a pleasure to now read Jaron Lanier’s Who Owns the Future?, which was published in the months following reelyActive’s incorporation. At the time, we were thinking about humans’ place in a world of ubiquitous radio-identification and real-time location. Lanier was thinking about humans’ place in the evolving information economy. In 2015 we took a critical look back on our first ever pitch. In this blog post we’ll take a critical look back on our history in light of Lanier’s predictions and arguments, starting with the central premise of his book:

The foundational idea of humanistic computing is that provenance is valuable. Information is people in disguise, and people ought to be paid for the value they contribute that can be sent or stored on a digital network.

We’re off to a good start, as this is directly in line with our long-established mission: to unlock the value of the data you choose to share. Nonetheless, our mission has often been a hard sell in the absence of a marketplace in which such contributions can actually be monetised. However, we’re not alone in envisaging this, as we discussed last year in Micro-transactions with macro-implications. And the notion that Data is Human is not without precedent either.

But how will the provenance of people’s contributions be captured and recorded?

Everyone will need to have a unique commercial identity in a universal public market information system. That contrasts with the way things work currently, where machines have unique identities, like IP addresses, but people don’t.

In 2013, the year the book was published, we were hard at work on the premise that people should have unique identities, not unlike IP addresses. Before Apple officially launched iBeacon, we demonstrated how the technology could be used in reverse to serve this exact purpose. A few months later, in a presentation entitled Advertise Yourself, we pitched this concept to industry leaders, including those of tech’s Big Four, at Bluetooth World in Silicon Valley.

But who will administrate these unique personal identities?

This is one of those cases where you have to choose the least of evils. You might not like the idea of a universal online identity, but face it, if you don’t allow one to come about in the context of government, it will happen anyway through companies like Google and Facebook. You might like and trust these companies now more than you like or trust the government, but you should see what happens to tech companies as they age.

In 2017 we demonstrated how Google could use their Physical Web as a means for users to opt-in to a physical-digital identity, as well as how Facebook could leverage their mobile app to a similar, although likely less transparent, end. A year later, Google had killed off their Physical Web and Facebook was mired in the Cambridge Analytica scandal. You should see what happens to tech companies as they age indeed!

So when might we expect this all to turn around?

Another basic function of the design of power must be to facilitate long-term thinking. Is it possible to invest in something that will pay off in thirty years or a hundred, or is everything about the next quarter, or even the next quarter of a millisecond?

That’s a question we asked that same year in our post on creating the next computing industry. As Alan Kay, veteran of Xerox PARC (which did think long-term) and creator of the Dynabook (which inspired the iPad), wrote to us: “I’ve never heard of VCs being interested in time frames like that.”

So there’s no chance for a startup to bring to life this vision of the future?

[A] startup-driven scenario is not absolutely impossible. A new startup could conceivably gain more clout than Facebook, and then stay true to its original intent, goading a critical mass of other, older Siren Servers into a new, humanistic phase of activity.

A funded startup staying true to its original intent is easier said than done, which is perhaps why Lanier seems not overly optimistic about this scenario. However, there are emerging alternatives, as we discuss in Purpose, commitment and accountability, specifically around rethinking ownership. It would be fair to say that we’ve embraced the arduous task of actually testing whether a startup-driven scenario is in fact “not absolutely impossible”.

Should we press on? How do we stay true?

Please do that, but please also stop once per hour and check yourself:   Are you still keeping people in the center?   Is it still all about the people?   Are you really avoiding the lazy trapdoor of falling back into thinking of people as components and a central server as being the only point of view for defining efficiency or testing efficacy?

Taking comfort in our alignment with Lanier’s hypothesis and predictions in this critical look back at our past seven years, we shall indeed press on, continuing to keep people in the centre of our vision for the future.   Who owns the future?   Perhaps reassuringly, there is still no consensus on the answer to that question!

The catallaxy of Industry 4.0

Consider the following prediction by Matt Ridley, in his 2010 book The Rational Optimist:

I forecast that the twenty-first century will show a continuing expansion of catallaxy — Hayek’s word for spontaneous order created by exchange and specialisation. Intelligence will become more and more collective; innovation and order will become more and more bottom-up; work will become more and more specialised, leisure more and more diversified.

Catallaxy is an alternative to the term economy, emphasising the emergent properties of exchange not among actors with common goals and values, but rather among actors with diverse and disparate goals: a concept central to the Austrian School, to which Friedrich Hayek was a renowned contributor.

Could one argue that Ridley’s prediction, and Hayek’s definition of catallaxy, represent a central tenet of Industry 4.0? In other words,

should we be discussing Industry 4.0 catallactics rather than economics?

While those questions are far too broad to address in a blog post, we can nonetheless address a single concept that we at reelyActive believe to be core to Industry 4.0: ubiquitous real-time location. If computers are able to understand who/what is where/how in any physical space, they can elevate efficiencies to a degree rivalled only by previous industrial revolutions. For instance, Jeremy Rifkin argues for a trebling of aggregate energy efficiency from 13% to 40% or more.
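To make that concrete, the kind of event such a system might emit could look something like the following. This is a purely hypothetical sketch in JavaScript, with illustrative names and values, not an actual platform schema:

```javascript
// Hypothetical real-time location event (illustrative names and values only)
const event = {
  who:   'device/ac:23:3f:a0:01:23',                     // anonymised identifier
  what:  'person',                                        // associated entity type
  where: { zone: 'entrance', floor: 2, building: 'A' },   // location in the space
  how:   { motion: 'stationary', dwellMinutes: 12 },      // behaviour and dynamics
  when:  '2019-06-20T14:30:00Z'                           // observation timestamp
};
console.log(`A ${event.what} is at ${event.where.zone}: ${event.how.motion}`);
```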

But alas, applying ubiquitous real-time location to the benefit of “actors with common goals and values” too easily equates with Big Brother, or surveillance by a state or entity, typically under the auspices of the “greater good”. In short, top-down organisation of ubiquitous real-time location is likely to restrict the free exchange of real-time location data, and hence the efficiencies of Industry 4.0, unless all actors’ interests are aligned.

And it is not difficult to argue that the actors—individuals, entities and their assets—are not aligned and do indeed have “diverse and disparate goals”. For instance, when you walk into a store, you may be motivated to find what you’re looking for as quickly as possible, while a salesperson may be motivated to upsell you, and a competitor may be motivated to promote their product or get you into their store. If a delightful retail experience is to spontaneously emerge, it will be the result of the voluntary exchange of location data driven by the “motivations and actions of individuals”: the basis of the Austrian School. In short, bottom-up organisation of ubiquitous real-time location arguably affords better outcomes to actors with competing motivations, especially when those actors and their interests vary so greatly across countless industries and contexts!

Would it be possible to imagine an emergent Pervasive Sharing Economy as anything but bottom-up?

Perhaps the closest thing to ubiquitous location we have today is through mobile, hence the retail example above. In Still Place for optimism? we illustrated how industry relies on top-down “baseline location data” where, often to the detriment of the located mobile user, that user’s competing interests are ignored rather than embraced. Couple this with state surveillance by means of mobile location data, and it becomes very difficult to make the case for a top-down approach as the catalyst for the widespread exchange of real-time location data necessary to deliver the promised efficiencies of Industry 4.0.

So, returning to Ridley’s prediction, the case for expansion of the catallaxy and bottom-up order is therefore strong, provided we can break from the long-standing top-down traditions of Industry 3.0. But isn’t breaking established traditions what revolutions are all about? And if we are to break from tradition, the economy of Industry 4.0 may well be described as a catallaxy. At least in the case of ubiquitous real-time location it’s difficult to imagine otherwise!

Are we selling discomfort?

What’s your product? “It’s a real-time data stream of who/what is where/how in any physical space” anyone at reelyActive might answer today. Okay. But what are you really selling?

Curiously, what we’re really selling today may very well be discomfort.

Really.

Take for instance our client who is measuring attendance and zone occupancy at a festival right now. Will the data comfort the festival and its sponsors by telling the story they want to hear about record numbers?   No.   The data will tell the story of what really happened, inevitably raising uncomfortable questions.

Take for instance our client who uses our data to help brick-and-mortar retailers compete on in-store client experience. Will the customer journey data comfort the retailer about how well they’re doing?   No.   The data will reveal their failures to meet their clients where they are, again, raising uncomfortable questions.

Take for instance our client who uses our data to observe occupancy and space utilisation in their offices. Will the data comfort them by showing that every space is just right and delivering an outstanding employee experience?   No.   The data will tell the story of which spaces are under- and over-utilised. They can expect an uncomfortable discussion about why they spent X on a space that nobody is using while there’s a shortage of some other type of space.

Why are our clients buying discomfort?

Our clients are buying discomfort because it motivates them to improve. Their competitors may be buying comfort: products and services that make the stakeholders and the status quo look good. And while that may still be an effective short-term strategy, our clients are embracing continuous improvement, and the associated discomfort, as they recognise this as the winning long-term strategy.

A festival that addresses what isn’t driving attendance delivers a better sponsor and attendee experience the following year. A retailer that meets its customers at the right time and place earns and retains their loyalty. A business that regularly adapts its offices to the needs of its employees enjoys the highest productivity.

So, are we selling discomfort?   Yes, in effect.   But our clients are more than comfortable with that.

Data is Human

“It’s up to smart humans to stop being stupid about AI.” That was the title of what we’d argue was the most impactful talk at Collision 2019 in Toronto. A single slide of three words eloquently summarises the challenge and opportunity of our era: Data is Human.

The presenter, Christian Beedgen, went on to connect this concept with a quote from Immanuel Kant:

“Treat people as an end, and never as a means to an end.”

Combined, the implication is as follows:

When we consider that data is simply data, it is easy, even generally accepted, to treat data as a means to an end.

When we consider that data is human, we are confronted with the ethical dilemma of treating humans as a means to an end.

The challenge of our era is to consider that data is human despite the temptation of, for instance, the lucrative advertising-revenue models of immensely successful Web 2.0-generation businesses, which rely on data as a means to an end.

Imagine if industry did indeed consider data to be human and treated it as such. Would we have had the occasion to write the following blog posts over the course of the past six years?

— We need to move Beyond people-as-a-product [2018].   Why?   Because data is human, and we should treat people as an end and not a means.

— We should take care to treat The IoT as your Brand Ambassador [2016].   Why?   Because data is human, and we should treat people as an end and not a means.

— Society can HOPE for a better Link [2016].   Why?   Because data is human, and we should treat people as an end and not a means.

— There’s a strong case for The Bank of Personal Data [2015].   Why?   Because data is human, and we should treat people as an end and not a means.

— There’s a legitimate fear of Big Brother and the Identity of Things [2013].   Why?   Because data is human, and we should treat people as an end and not a means.

Of course, all too often, industry continues to treat data in the manner that is most convenient, not necessarily that which is most appropriate. This is even more concerning in light of the Internet of Things, where humans generate orders of magnitude more data, often unknowingly.

In fact, our cheeky April Fool’s post introduced the concept of the “Digital Conjoined Twin”, arguably the ultimate manifestation of data being human. Will industry practices drive people to a point where they will go so far as to host their own data on their person? Almost certainly not, assuming that companies embrace, in earnest, the concept that data is human.

However, if companies continue to treat user data as a means to an end, the consequence may well be their users finding a means, however extreme, to end the relationship. It’s up to smart humans to avoid that outcome.

RFID Journal Live 2019

Oh the irony of human-entered data at an RFID conference. Ten years ago, Kevin Ashton, who coined the term “Internet of Things”, explained in RFID Journal:

We need to empower computers with their own means of gathering information […] without the limitations of human-entered data.

Case in point, the badge: the surname and given name are reversed, with the latter misspelled as a result of human data entry during onsite registration from a paper & pencil form. Nonetheless, this is an excellent example for emphasising the potential of RFID and the IoT!

Indeed, at the co-hosted IEEE RFID event, I, Jeffrey (“Jeffery” according to said badge), presented a workshop entitled Co-located RFID Systems Unite!, focused on this potential now that nearly 20 billion RAIN (passive) and BLE (active) units are shipping annually. An open architecture for collecting, contextualising and distributing the resulting data is becoming critical, and I was pleased to hear this sentiment echoed on the RFID Journal side by Richard Haig of Herman Kay and Joachim Wilkens of C&A.
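To sketch what we mean by such an open architecture, consider a collector that normalises decodings from both radio protocols into a single stream that any consumer can subscribe to. The interfaces below are hypothetical, in Node.js, for illustration only:

```javascript
const EventEmitter = require('events');

// Hypothetical collector that normalises RAIN and BLE decodings into a
// single 'decoding' event stream for downstream distribution
class RadioCollector extends EventEmitter {
  handleRainDecoding(epc, readerId, rssi) {
    this.emit('decoding', { protocol: 'RAIN', transmitterId: epc,
                            receiverId: readerId, rssi, timestamp: Date.now() });
  }
  handleBleDecoding(address, gatewayId, rssi) {
    this.emit('decoding', { protocol: 'BLE', transmitterId: address,
                            receiverId: gatewayId, rssi, timestamp: Date.now() });
  }
}

const collector = new RadioCollector();
collector.on('decoding', (decoding) => console.log(decoding)); // distribute
collector.handleBleDecoding('00:11:22:33:44:55', 'gateway-1', -72);
```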

Also heard echoed was the prevalence of BLE (active RFID) throughout the conference. Literally.

This contraption, which converts radio decodings into musical notes, may seem odd at first, but over the past year we’ve learned that art is a powerful tool for conveying to a non-technical audience the prevalence and potential of RFID and IoT in our daily lives. A few attendees were invited to listen with headphones and walk around until they found a silent spot. None were successful.
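For the technically curious, one way such a contraption might work, sketched in browser JavaScript with the Web Audio API, is to map each decoded identifier to a musical note. This is our speculation; the actual implementation surely differs:

```javascript
// Speculative sketch: hash each decoded identifier to a pentatonic note
// and play it briefly (requires a user gesture to start audio in browsers)
const PENTATONIC = [261.63, 293.66, 329.63, 392.00, 440.00]; // C D E G A, in Hz
const audioContext = new AudioContext();

function playDecoding(deviceId) {
  const hash = [...deviceId].reduce((sum, char) => sum + char.charCodeAt(0), 0);
  const oscillator = audioContext.createOscillator();
  oscillator.frequency.value = PENTATONIC[hash % PENTATONIC.length];
  oscillator.connect(audioContext.destination);
  oscillator.start();
  oscillator.stop(audioContext.currentTime + 0.2); // one short note per decoding
}

playDecoding('00:11:22:33:44:55'); // more devices nearby = more notes = no silence
```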

And we can only expect such prevalence to increase with energy harvesting technology maturing. We were pleased to see Wiliot’s live demo of an energy harvesting BLE tag, making good on their objectives from last year’s conference. Inexpensive battery-free BLE will be key to RFID proliferating to all the physical spaces in which we live, work and play—the BLE receiver infrastructure is often already there.

Which came first: the RFID or the Digital Twin?

The concept of the Digital Twin has also taken off over the past year, and we were pleased to have the opportunity to ask Jürgen Hartmann which came first in the Mercedes-Benz car factory example he presented. His answer was clear:

“Without RFID, for us there is no Digital Twin.”

Ironically, our April Fool’s post from two days prior was about Digital Conjoined Twins, in which we joked that the digital twin resides in the optimal location: adjacent to the physical entity that it represents. Perhaps not so silly in the context of industrial applications highly sensitive to latency?

RFID projects championed by the organisation’s finance department?

That is exactly what Joachim Wilkens of C&A argued. The success of their retail RFID deployment was a direct consequence of the C-level being on board but, more importantly, of having a business case championed by the finance department:

“This is not an IT project, this is a business project.”

While we’ve observed our fair share of tech-driven deployments over the past few years, we’re increasingly seeing measurable business outcomes. For instance, a recent workplace occupancy deployment delivered, within months, a 15% saving in real estate. That is a business project—one the finance department would love to repeat!

IoT: the next generation

What will we discuss in our RFID Journal Live 2029 blog post when the IoT celebrates its third decade?   That may well be in the hands of the next generation.   Since we began attending the co-hosted IEEE RFID and RFID Journal Live in 2013, we’ve observed a slow but steady shift in demographics. A younger generation—one which grew up with the Internet—is succeeding the generation instrumental in the development and commercialisation of RFID. On the showroom floor, we’re talking about the Web and APIs. At the IEEE dinner we’re discussing industry-academia collaboration to teach students about applications and ethics. And in the IEEE workshops, ASU Prof. Katina Michael took the initiative to invite one of her undergraduate students to argue the (highly controversial) case for implantables, effectively ceding centre stage to the next generation.

RFID's next generation is coming of age

The final print copy of RFID Journal we received back in 2012 is entitled “RFID’s Coming of Age”. Today I would argue that RFID’s next generation is coming of age. 1999 saw the emergence of the terms IoT and Web 2.0. Might we expect 2019 to mark the emergence of the term RFID 2.0?

5G and the Digital Conjoined Twin

For years we’ve been hearing about the rollout of 5G, with much speculation about how exactly it will transform our daily lives. As with any new technology, the most disruptive use case often emerges completely unexpectedly. In the case of 5G, even with limited availability, we’re already observing the emergence of the digital conjoined twin.

The concept of the digital twin, a digital replica and history of a living or non-living physical entity, percolated up the list of the top buzzwords of 2018. And as vendors now clamour to market digital twin solutions while enterprises scramble to develop digital twin strategies, some individuals are already taking the concept one step further.

Driven by concerns about data privacy in light of the many recent breaches and scandals, a few tech-savvy individuals have elected to reclaim their digital twin, hosting and managing their digital-self-replica in the cloud themselves. The most intrepid are going so far as to bypass the cloud entirely, instead hosting their digital twins locally—that is to say, on their physical person. In the words of one such individual:

“5G eliminates the only remaining reasons for me to choose the cloud: high bandwidth and low latency. So I cut out the middleman.”

By hosting one’s own digital representation, in this case using an open source Node.js server running on a Raspberry Pi with a 5G stick, physically attached to oneself at all times, the result is a digital conjoined twin.
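A minimal sketch of such a self-hosted twin, assuming nothing about the actual setup beyond a Node.js server on a Raspberry Pi, might look like this:

```javascript
const http = require('http');

// Illustrative twin state: an identifier, a timestamp and free-form state
let twin = { id: 'https://twin.example/me', updatedAt: null, state: {} };

http.createServer((req, res) => {
  if (req.method === 'POST') {             // the person updates their own state
    let body = '';
    req.on('data', (chunk) => { body += chunk; });
    req.on('end', () => {
      twin.state = JSON.parse(body);
      twin.updatedAt = new Date().toISOString();
      res.writeHead(204);
      res.end();
    });
  }
  else {                                    // anyone (permitted) reads the twin
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(twin));
  }
}).listen(3001);                            // served from the Pi, over 5G
```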

The inconvenience of carrying a device that needs to be recharged once or more per day is offset by the fact that the digital twin resides in the optimal location: adjacent to the physical entity that it represents. Updates to the state of the individual are immediately reflected in their digital conjoined twin, accessible on the Internet with sub-millisecond latency thanks to 5G. In the age of instant updates, it simply does not get more real-time than this!

This is distinct from carrying a smartphone, the sidekick device of the past decade. The mobile ecosystem never embraced the digital conjoined twin paradigm as, arguably, this would challenge their established People-as-a-Product business model. It’s easy to see why:

“With this setup I can literally pull the plug on my data at any time. It’s a USB cable connected to a power pack.”

Indeed, 5G introduces the possibility to live “at the edge” in a quite literal sense. And the scenario becomes even more interesting when two pairs of digital conjoined twins meet.

While each human discovers the other visually and subsequently engages in verbal conversation, their digital twins spontaneously discover one another via Bluetooth Low Energy and subsequently engage in IP-over-5G conversation. The digital twins require no intervention from their human counterparts in order to engage one another. This leaves the individuals free to interact with one another, once again without distraction—just as humans have done for hundreds of thousands of years—save of course for the quirky contraptions conjoined with their bodies.

Today, April 1st, 2019, marks the first documented encounter of pairs of digital conjoined twins. It would be foolish not to think of this as the dawn of a new era for humanity.

The Web turns 30!

The World Wide Web turns 30 years old today, March 12th 2019, and its inventor, Sir Tim Berners-Lee, has asked its citizens to help build a timeline of the web’s history. Given that reelyActive has existed on the Web for about a quarter of its history, we thought it’d be fun to look back at our own Web presence since 2012.

2012: First landing page

Our first landing page showed off our newly minted hardware — purely in monochrome — and guided visitors to either a serious or a silly site. In 2012, real-time location of individuals (aka tracking people) was controversial, and the silly site was intended to show visitors that we were indeed looking critically at ourselves and the platform we were developing. Indeed there’s still a sillyActive filter on this blog!

2012: The “serious” site

Yes, this was our non-responsive-design, fully monochrome, “serious” developer-oriented website in 2012. Fixed 960px width because who needed more resolution than that, right? We had just learned PHP, which was applied generously throughout. Credit to Twitter and LinkedIn for the social plugins in the footer that are still functional at the time of writing!

  ↪ Journey back in time to www.reelyactive.com/serious/

2013-2014: The “corporate” site

Finally colour! And JavaScript animations! Curiously enough, our first major experience with JavaScript was server-side with Node.js, which then gave us the confidence to apply it client-side, in this case with the TweenMax JS framework. Despite its non-responsive 960px formula, this website received a lot of appreciation for the Technology and Applications animations, which were refreshingly accessible to a broad (non-native-English-speaking) audience.

  ↪ Journey back in time to www.reelyactive.com/corporate/

2014-2016: The “context” site

By 2014 we had finally gone responsive, abandoning ground-up web design for the lovely Creative Commons frameworks from HTML5 UP that we continue to use to this day. There was even a full-screen explainer video and fresh graphics from local designers.

  ↪ Journey back in time to context.reelyactive.com

2016-present: Our current landing page

Our current landing page has benefited from many iterations, and the most powerful changes are the ones behind the scenes. Many of the pages include linked data in the form of JSON-LD, which became a Web standard in 2014. This is what allows Google and other search engines to extract rich content, using Schema.org as a vocabulary. All of our IoT initiatives, such as Sniffypedia.org, rely heavily on these latest standards which are extending the reach of the Web to the physical world.

  ↪ Visit our webpage at www.reelyactive.com
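For illustration, the JSON-LD embedded in such a page might look like the following. This is a simplified sketch with illustrative property values, using the Schema.org vocabulary:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "reelyActive",
  "url": "https://www.reelyactive.com",
  "description": "Unlock the value of the data you choose to share.",
  "sameAs": [ "https://twitter.com/reelyActive" ]
}
</script>
```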

Thank you Sir Tim Berners-Lee for your enormous contribution to humanity in the form of the Web, and to everyone who has contributed to its evolution and maintenance these past three decades. We were pleased to give a friendly nod to your contributions in our most recent scientific publication, and we hope that Mrs. Barnowl will have the chance to meet you and thank you in person, as she did Vint Cerf for his contribution to the Internet!

Local Search in 2019

What’s in store (pun intended) for 2019? The Local Search Association kicks off each year with a compilation of its members’ predictions.

This is our third consecutive year submitting predictions. While our 2017 prediction proved way too optimistic, our 2018 prediction, and the two we submitted this year, resonate well with those of our peers.

Google My Business

GMB, as insiders call it, is easily the central theme of this year’s predictions. As Google describes it, your Business Profile appears right when people are searching for your business or businesses like yours on Google Search and Maps. Said differently, GMB democratises the digitisation of the physical storefront. Because Google observes web standards such as Schema.org, GMB is in line with A structured, data-driven future predicted by MONO’s Louise Lachmann (p. 47) and Svenn Anderson (p. 56). We couldn’t be more pleased about this development as it finally addresses (pun again intended), at scale, the question we posed in our 2018 prediction:

Might brick-and-mortar benefit by borrowing now familiar concepts from their online counterparts?

In fact, Marc Poirier of Acquisio, a business local to us in the Montréal area, specifies exactly how Google breaks down [the] barrier to entry for local advertisers (p. 34) to make our previous prediction a reality.

Alternatives to the mobile-social status quo

One of our two predictions this year was The emergence of refreshing alternatives to the mobile-social status quo, the paradigm which has largely characterised the current decade. Perhaps the best indication that change is afoot is the dearth of predictions about social networks! The Social & Reviews section is a mere four pages long, with one prediction being our own, and another, by Surefire Local’s Shashi Bellamkonda, being that Google will finally get social right (p. 26).

Indeed, if there’s one prediction that has us excited, it is the growth of No click search results predicted by Infront Webworks’ Michael Hodgson (p. 15). Thanks to the aforementioned structured data in the form of Schema.org, anyone — not just Google — can present information in a novel way to users through the web browser. In fact, one of our current top initiatives at reelyActive is in-page search of physical objects within physical spaces, leveraging these same standards.

Privacy, data ownership and consumer inclusion

The other of our two predictions is that Consumer inclusion will become the norm rather than the exception. As many experts cite in their predictions, the subject of ownership of personal data has come to the forefront, as was clearly the case at the LSA’s Place Conference a few months prior. Jeff White of Gravy Analytics predicts that The data privacy storm is coming (p. 66) while Skyhook’s Kipp Jones suggests that “the rumors of the death of privacy have been greatly exaggerated” (p. 54). While there’s consensus among the experts that change is coming, the nature and impact of that change are, however, very much up for debate.

Of course concerns over data ownership apply equally to SMBs and brands themselves, as Location3’s Josh Allen highlights in his prediction of Data ownership & brand control (p. 68), concluding:

Own your website. Own your accounts. Own your data.

This mantra befits the individual user as much as the brands and businesses with which they interact. We’d therefore be pleased to see the conversations about data ownership around the boardroom table translate into more meaningful conversations around the kitchen table.

Summing it all up

In conclusion, we’re partial to Gyi Tsakalakis’ Plus ça change, plus c’est la même chose (“the more things change, the more they stay the same”) prediction (p. 19), not just because it is titled in French, but because it reminds us all that technology is only complementary to what has made, and continues to make, the local space great:

While 2019 is sure to bring improvements in the way that machines understand the real world, success in the local space will continue to be built on remarkable service, reputation, and relationships.

Here’s to a successful 2019 for local search, to a broadly-accessible standards-based web, and to serious industry and individual introspection about privacy!

The future of mobile indoor location

Since the advent of the iBeacon five years ago, much effort has been spent on real-time location-based experiences through mobile. If today you were to ask “Should I develop a native mobile app?” to anyone who has invested in such efforts, you may well receive an emphatic NO.   In this blog post, we’ll not only explain why, but also what you’ll likely want to use instead: the Web.

The motivation for this blog post stems from a recent presentation to a museum team who shared with us their frustration with their own real-time location-based app experience, following recent changes to Google’s Android mobile operating system. In short, the visitor experience of their museum app has tanked for Android users. This made us think back to our first ever pitch deck back in 2012:

That was before the iBeacon, and our solution, in the case of a museum, was to provide guests with a badge (equivalent to a beacon) and invite them to experience real-time location-based digital content on the Web, either through their own mobile device or a provided tablet. We helped create exactly this experience at the MuseoMix Montréal hackathon in 2014, which participants loved:

But, alas, 2014 was still a time when mobile apps could do no wrong and the Bluetooth beacon was the saviour of indoor location: our approach didn’t stand a chance. Over the past few years, countless venues and companies have invested heavily in beacon-based native mobile apps. There have been some brilliant successes. But most of the veterans we’ve encountered of late have been underwhelmed and battered at best.

What have Google done to beacon-based location in Android?
  — A frustrated colleague, Sept. 2018

And now this latest incident where significant changes to Android caused panic among many of our colleagues, partners and clients! For many, beacon-based mobile location experiences went from passaBLE to terriBLE. Case in point: the museum. What were Google thinking? In a recent post we speculated whether Google might have “something revolutionary up their sleeve”. Could that something be…   …the Web?

Could Google be thinking web-first about mobile indoor location?

One might not be aware, but Google’s Chrome browser has supported Web Bluetooth for some time. For instance, you can program a Bluetooth Low Energy (BLE) device wirelessly from the browser! Yet surprisingly, Chrome support for the comparatively simple — yet immensely powerful — feature of scanning for all nearby BLE devices has been relegated to the “What’s Next” list for years now.
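To appreciate what Chrome already supports: a page served over HTTPS can, following a user gesture, connect to a nearby BLE device and read one of its characteristics using the standard Web Bluetooth API. The battery service below is just an example:

```javascript
// Must be triggered by a user gesture (e.g. a button click) on an HTTPS page
async function readBatteryLevel() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ['battery_service'] }]   // prompts a device chooser
  });
  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('battery_service');
  const characteristic = await service.getCharacteristic('battery_level');
  const value = await characteristic.readValue();  // a DataView
  console.log(`Battery level: ${value.getUint8(0)}%`);
}
```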

Is this due to a technical problem?   Doubtful.
Is this due to a business problem?   Likely.   The scan feature would make beacon-based native apps, and all their behind-the-scenes business models, largely irrelevant.

But wait, Google DID just make beacon-based native apps largely irrelevant…

If Google were to suddenly (and unexpectedly!) implement the scan feature in Chrome, by far the world’s most popular mobile browser, what a progressive disruption that would be! Expect a renaissance of web apps on mobile as web developers could easily tailor the JavaScript of existing pages to deliver hyper-localised beacon-based experiences previously reserved for native apps.
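Based on the draft Web Bluetooth Scanning specification (not shipped in stable Chrome at the time of writing, so the exact shape may well change), such an experience might be as simple as:

```javascript
// Sketch based on the draft Web Bluetooth Scanning spec; subject to change
async function scanForBeacons() {
  const scan = await navigator.bluetooth.requestLEScan({
    acceptAllAdvertisements: true
  });
  navigator.bluetooth.addEventListener('advertisementreceived', (event) => {
    // e.g. tailor the page content to the closest beacon (strongest RSSI)
    console.log(event.device.id, event.rssi, event.txPower);
  });
  // scan.stop(); // once the hyper-localised experience is complete
}
```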



If, unfortunately, the scan feature were to remain pending indefinitely (one can check if/when it works here), the most economical and reliable solution in most cases today is nonetheless the one we initially championed: provide users a $5 beacon-badge for real-time location and retain the mobile device merely as an interface (web or native). $500 rectangle, meet $5 rectangle indeed!



And while the astute reader will exclaim “but what about the cost of the Real-Time Location System (RTLS) infrastructure!?!”, we’ve noted of late that a BLE RTLS may well cost considerably less than the development and maintenance of a beacon-based native mobile app! Moreover, a BLE RTLS (like our own) simultaneously supports both the Web Bluetooth and RTLS approaches.

Coming back to Google, our cautious optimism about a potential shift to web-first mobile indoor location stems from the presence of Vint Cerf, their vice-president and Chief Internet Evangelist, who, in 2015, shared a three-pronged approach to the Internet of Things that would indeed be consistent with such a strategy.

Regardless of what Google might have up their sleeve, if today you find yourself asking “Should I develop a native app?” to meet a need for mobile indoor location, in most cases the answer is clearly no. You should use the Web. It remains the ultimate “interoperable ecosystem based on open standards”, and no single vendor will be able to do away with it on mobile.