Location and traceability in times of pandemic

Yesterday (March 11th, 2020), the World Health Organisation’s Director General characterised COVID-19 as a pandemic. Today, in Montréal where reelyActive is based, and around the world, many find themselves directly and personally affected by measures intended to prevent the spread of the virus, including business, institution and school closures, as well as travel restrictions and self-quarantine.

As our team adjusts to these changes, we are prompted to recall use cases of our technology particularly applicable to the situation in which we, and countless others, now find ourselves, and which we’ll present in this blog post.

Working remotely but not alone

In 2014, our clients, who were developing The Thing System, worked remotely from California and the UK, and devised a clever use of their technology and ours: whenever a team member was present in their home office (as detected by our tech), a light would turn on in their colleague’s home office halfway around the world (enabled by their tech). In this way, each team member was aware, through calm technology, when their colleague was “at work”, so that they could confidently initiate communication at an appropriate moment, whenever required.
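The presence-to-light mechanism described above can be sketched in a few lines. Everything here (class name, timeout value, callback) is an illustrative guess, not The Thing System’s actual API:

```python
# Sketch: turn on a colleague's "at work" light when office presence is
# detected, and off again after a period of radio silence.

PRESENCE_TIMEOUT_S = 300  # consider someone absent after 5 min of silence


class OfficePresence:
    def __init__(self, on_change):
        self.last_seen = None
        self.present = False
        self.on_change = on_change  # callback, e.g. toggles the remote light

    def radio_decoding(self, timestamp):
        """Called whenever the occupant's device is detected in the office."""
        self.last_seen = timestamp
        if not self.present:
            self.present = True
            self.on_change(True)   # colleague's light on

    def tick(self, timestamp):
        """Called periodically to detect absence."""
        if self.present and (timestamp - self.last_seen > PRESENCE_TIMEOUT_S):
            self.present = False
            self.on_change(False)  # colleague's light off
```

The calm-technology property comes from the edge-triggered callback: the light changes state only on arrival and departure, never demanding attention in between.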

Beaming in

In 2016, our clients Event Presence reached out to make their Beam mobile telepresence experience location-aware. Can’t physically attend an event or conference? They offered a means to attend remotely with the ability to move around and interact freely. As “beaming in” to an unfamiliar space can be disorienting, we worked with them to provide real-time location and context to their remote attendees. Working with the Beams, we found them incredibly useful not only for remotely attending events, but also for working remotely on the very deployments we were developing. On many occasions we even found ourselves chatting Beam-to-Beam in the venue, surely to the bewilderment of passers-by, but very much to the benefit of what we were working to achieve!

Tracing person-to-person and person-to-asset interactions

In 2019, our clients at USC deployed the first trials of DirAct, a technology we co-developed, to automatically capture person-to-person and person-to-asset interactions in an active hospital setting. Hospital staff opt in to wear a Bluetooth Low Energy badge which detects other badges or asset tags in proximity, and which relays this information via our gateway infrastructure temporarily deployed throughout the hospital. Our colleagues at USC collect this information as part of a study to determine workplace stress factors; however, it is not difficult to imagine how this same deployment could be used for traceability of interactions between staff and patients, as well as with hand-washing stations, in the context of a contagious disease such as COVID-19.
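In the spirit of such a deployment, interaction capture can be sketched as aggregating proximity sightings into sustained-contact records. The thresholds and data shapes below are illustrative, not DirAct’s actual parameters:

```python
# Sketch: aggregate badge proximity sightings into interaction records.
# Thresholds are illustrative and would be tuned per deployment.

from collections import defaultdict

RSSI_THRESHOLD = -65   # stronger (closer) than this counts as proximity
MIN_SIGHTINGS = 3      # require repeated sightings to filter out passers-by


def interactions(sightings):
    """sightings: iterable of (badge_id, peer_id, rssi) tuples.

    Returns the set of (id, id) pairs exhibiting sustained close proximity.
    """
    counts = defaultdict(int)
    for badge, peer, rssi in sightings:
        if rssi >= RSSI_THRESHOLD:
            pair = tuple(sorted((badge, peer)))  # an interaction is symmetric
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= MIN_SIGHTINGS}
```

The same aggregation works whether the peer is another badge (person-to-person) or an asset tag on, say, a hand-washing station (person-to-asset).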

The measures currently undertaken to curb the spread of COVID-19 remind us of the pertinence of our physical location—and that of others—in our daily lives, especially as our movements become impeded or restricted.

In these times, it is not difficult to envisage the wide-reaching potential of real-time location technology, as evidenced by the above examples to which we’ve proudly contributed. From wherever you find yourself reading this, we trust that you will stay as healthy as you are innovative!

Predict not what you can create

We’re all familiar with some variant of the adage:

The best way to predict the future is to create it.

Our first blog post of each of the past three years has been about prediction. This year will be different.

What’s different this year is our move into a new office which will serve as both a showcase and a living lab. We share this new space with GenieLab, a non-profit with which we’ve worked closely over the years. Our artistic collaborations will be among the installations. But most importantly,

we’ll be able to demonstrate what it means to make a physical space searchable like the web.

Our workplace will be the workplace of the future, one which we create! To reinforce that point, we’re calling the office reelyActive Parc. Fitting because it is at the corner of rue Sherbrooke and avenue du Parc in Montréal, but even more so because of the legacy of another PARC: the Palo Alto Research Center.

If you’re not familiar with the story, in the 1970s and early 80s, Xerox PARC incubated many of the technologies we still associate with the modern digital workplace, including:

  • the graphical user interface (GUI)
  • bitmap graphics
  • WYSIWYG text editing
  • the laser printer
  • Ethernet

as well as many more software innovations. Moreover, Alan Kay, a significant contributor to PARC’s innovations, is credited with not only saying, but proving, that:

The best way to predict the future is to invent it.

In 2017, we were fortunate to have an e-mail exchange with Kay sparking our blog post on creating the next computing industry. We kick off the new decade in our Park Avenue Research Centre with the ambition to do exactly that.

Open hardware: a closed case?

If you’re familiar with our blog posts, you’ll know we never get bored populating them with puns (oof, there they are already). In case it isn’t obvious, the title is playing on the word case: we’ll be talking about enclosures here. Specifically,

why is open hardware so seldom designed to take advantage of an off-the-shelf enclosure?

In other words, why invest the time and energy to develop a hardware board that doesn’t fit in any standard box? For those who are less familiar with hardware development, one of the first steps in designing the physical board (PCB) on which the circuits and components reside is to establish its dimensions, its shape and the location of the holes with which it can be mounted into an enclosure. At this point, it would be straightforward to choose a commercial off-the-shelf (COTS) enclosure that closely matches the target dimensions, and to adjust these, and the locations of the mounting holes, accordingly.
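That first step can be illustrated with a trivial calculation. All numbers and clearances here are hypothetical; in practice one works from the enclosure vendor’s mechanical drawing:

```python
# Sketch: derive a PCB outline and corner mounting-hole positions from a
# COTS enclosure's published internal dimensions (illustrative values only).

def board_from_enclosure(internal_w, internal_l, clearance=0.5, hole_inset=4.0):
    """All dimensions in mm. Returns (width, length, [(x, y) hole positions])."""
    w = internal_w - 2 * clearance   # uniform gap so the board drops in
    l = internal_l - 2 * clearance
    holes = [(x, y) for x in (hole_inset, w - hole_inset)
                    for y in (hole_inset, l - hole_inset)]
    return w, l, holes

# e.g. a nominal 50 x 80 mm internal cavity:
w, l, holes = board_from_enclosure(50.0, 80.0)
```

The point is simply that the enclosure drives the board outline, rather than the outline being chosen first and an enclosure sought after the fact.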

Why does the board need to fit in a standard enclosure? Again, for those less familiar with hardware development, developing and producing a custom enclosure typically runs into the tens or hundreds of thousands of dollars. A COTS enclosure might cost $5 and will be available in small quantities. The economics speak for themselves, especially when many projects based on open hardware require only tens or hundreds of completed units.

Case in point (pun intended, of course), the first generation of hardware we built at reelyActive in 2012: both our active RFID tag and our reelceiver were designed around off-the-shelf enclosures, from OKW Enclosures (MINITEC) and New Age Enclosures (3 inch Dongle) respectively. These decisions were as economically sound then as they remain today, across all the units we’ve produced to date. And our customers enjoy the benefit and freedom of customising the enclosures (colour, form factor, etc.) either with our help or directly with the enclosure vendor.

And therein lies the philosophical issue with open hardware that lacks a standard enclosure: if one of the principal objectives of the “open” movement is freedom from lock-in, this practice doesn’t sit well (pun very much intended). One may argue that open hardware offers developers the freedom to adapt the design to a standard enclosure; however, this requires specialised skills and iterations that significantly extend the timeline. And, if there are any radios onboard, as is typically the case with IoT designs, any such adaptation may engender costly, risky and time-consuming re-certification.

Designing open hardware around COTS enclosures is both economically and philosophically sound.

The motivation for this post stems from our disappointment in a recent survey of hardware platforms (both open and closed) on which to develop our own next-generation hardware. While there are plenty of options available, we were unable to identify a single one that streamlined rapid prototyping out-of-the-box, or “into-the-box” as is surely more apt. So, eight years after developing our first “into-the-box” prototypes it seems this task is still up to us—open hardware is far from a closed case. We promise to make our design available to the community and hope it encourages wider adoption of “into-the-box” best practices.

Ask your digital transformation provider…

Would you choose to receive dental treatment from a dentist with bad teeth? Fitness coaching from a coach who is unfit? Change management consulting from an organisation resistant to change?

At the First International Innovation 4.0 Forum this week, we took in multiple vendor presentations about the digital transformation of entire industries, including the services those vendors offer their clients, which, of course, raised the question:

how far along are those vendors themselves in their own digital transformation?

For instance, the conference was opened by Eric Schaeffer, leader of Accenture’s Digital Industry X.0 program, who argued confidently that “business as usual is not an option anymore”, a statement we agree applies to companies across industries. He went on to argue that transformation is a continuous process rather than simply a discrete step:

“Like the tide, it keeps rolling. You have to live with it. That’s the bottom line.”

It would therefore be interesting to understand to what extent Accenture—and their peers—have themselves embraced, internally, a culture of continuous change and ongoing transformation. We wonder because we at reelyActive, despite having a small and highly motivated team who believe we should practice what we preach, nonetheless find this to be a challenging and demanding endeavour.

Earlier this year we asked Are we selling discomfort? The answer is yes: data from our platform indeed highlights that business as usual is no longer an option for our clients—far from a comfortable proposition. Said differently, vendors selling comfort are selling something other than transformation. Those leading the business of transformation are effectively selling discomfort, both externally and internally. And how many companies are actually comfortable with that?

So, for those advancing toward Industry 4.0, or X.0, or [insert buzzword here], we offer as advice to ask your digital transformation provider how they themselves realised their own digital transformation. There is, quite literally, much to be learned from their answer.

Who owns the future?

Occasionally a book will provide the opportunity for the reader to travel back in time and delve into the mind of an author who was thinking far ahead, at least as far as the reader’s present. What a pleasure to now read Jaron Lanier’s Who Owns the Future?, which was published in the months following reelyActive’s incorporation. At the time, we were thinking about humans’ place in a world of ubiquitous radio-identification and real-time location; Lanier was thinking about humans’ place in the evolving information economy. In 2015 we took a critical look back on our first ever pitch. In this blog post we’ll take a critical look back on our history in light of Lanier’s predictions and arguments, starting with the central premise of his book:

The foundational idea of humanistic computing is that provenance is valuable. Information is people in disguise, and people ought to be paid for the value they contribute that can be sent or stored on a digital network.

We’re off to a good start as this is directly in line with our long-established mission: to unlock the value of the data you choose to share. Nonetheless, our mission has often been a hard sell in the absence of a marketplace in which such contributions can actually be monetised. However we’re not alone in envisaging this as we discussed last year in Micro-transactions with macro-implications. And the notion that Data is Human is not without precedent either.

But how will the provenance of people’s contributions be captured and recorded?

Everyone will need to have a unique commercial identity in a universal public market information system. That contrasts with the way things work currently, where machines have unique identities, like IP addresses, but people don’t.

In 2013, the year the book was published, we were hard at work on the premise for people to have unique identities, not unlike IP addresses. Before Apple officially launched iBeacon, we demonstrated how the technology could be used in reverse to serve this exact purpose. A few months later, in a presentation entitled Advertise Yourself, we pitched this concept to industry leaders, including to those of tech’s Big Four, at Bluetooth World in Silicon Valley.

But who will administrate these unique personal identities?

This is one of those cases where you have to choose the least of evils. You might not like the idea of a universal online identity, but face it, if you don’t allow one to come about in the context of government, it will happen anyway through companies like Google and Facebook. You might like and trust these companies now more than you like or trust the government, but you should see what happens to tech companies as they age.

In 2017 we demonstrated how Google could use their Physical Web as a means for users to opt-in to a physical-digital identity, as well as how Facebook could leverage their mobile app to a similar, although likely less transparent, end. A year later, Google had killed off their Physical Web and Facebook was mired in the Cambridge Analytica scandal. You should see what happens to tech companies when they age indeed!

So when might we expect this all to turn around?

Another basic function of the design of power must be to facilitate long-term thinking. Is it possible to invest in something that will pay off in thirty years or a hundred, or is everything about the next quarter, or even the next quarter of a millisecond?

That’s a question we asked that same year in our post on creating the next computing industry. As Alan Kay, veteran of Xerox PARC (which did think long-term) and creator of the Dynabook (which inspired the iPad), wrote to us: “I’ve never heard of VCs being interested in time frames like that.”

So there’s no chance for a startup to bring to life this vision of the future?

[A] startup-driven scenario is not absolutely impossible. A new startup could conceivably gain more clout than Facebook, and then stay true to its original intent, goading a critical mass of other, older Siren Servers into a new, humanistic phase of activity.

A—funded—startup staying true to its original intent is easier said than done, which is perhaps why Lanier seems not overly optimistic about this scenario. However, there are emerging alternatives as we discuss in Purpose, commitment and accountability, specifically around rethinking ownership. It would be fair to say that we’ve embraced the arduous task of actually testing whether a startup-driven scenario is in fact not absolutely impossible.

Should we press on? How do we stay true?

Please do that, but please also stop once per hour and check yourself:   Are you still keeping people in the center?   Is it still all about the people?   Are you really avoiding the lazy trapdoor of falling back into thinking of people as components and a central server as being the only point of view for defining efficiency or testing efficacy?

Taking comfort in our alignment with Lanier’s hypothesis and predictions in this critical look back at our past seven years, we shall indeed press on, continuing to keep people in the centre of our vision for the future.   Who owns the future?   Perhaps reassuringly, there is still no consensus on the answer to that question!

The catallaxy of Industry 4.0

Consider the following prediction by Matt Ridley, in his 2010 book The Rational Optimist:

I forecast that the twenty-first century will show a continuing expansion of catallaxy — Hayek’s word for spontaneous order created by exchange and specialisation. Intelligence will become more and more collective; innovation and order will become more and more bottom-up; work will become more and more specialised, leisure more and more diversified.

Catallaxy is an alternative to the term economy, emphasising the emergent properties of exchange not among actors with common goals and values, but rather among actors with diverse and disparate goals, a concept central to the Austrian School, to which Friedrich Hayek was a renowned contributor.

Could one argue that Ridley’s prediction, and Hayek’s definition of catallaxy, represent a central tenet of Industry 4.0? In other words,

should we be discussing Industry 4.0 catallactics rather than economics?

While those questions are far too broad to address in a blog post, we can nonetheless address a single concept that we at reelyActive believe to be core to Industry 4.0: ubiquitous real-time location. If computers are able to understand who/what is where/how in any physical space, they can elevate efficiencies by magnitudes rivalled only by previous industrial revolutions. For instance, Jeremy Rifkin argues for a trebling of aggregate energy efficiency from 13% to 40% or more.

But alas, applying ubiquitous real-time location to the benefit of “actors with common goals and values” too easily equates with Big Brother, or surveillance by a state or entity, typically in the name of the “greater good”. In short, top-down organisation of ubiquitous real-time location is likely to restrict the free exchange of real-time location data, and hence the efficiencies of Industry 4.0, unless all actors’ interests are aligned.

And it is not difficult to argue that the actors—individuals, entities and their assets—are not aligned and do indeed have “diverse and disparate goals”. For instance, when you walk into a store, you may be motivated to find what you’re looking for as quickly as possible while a salesperson may be motivated to upsell you, while a competitor may be motivated to promote their product or get you into their store. If a delightful retail experience is to spontaneously emerge, it will be the result of the voluntary exchange of location data by the “motivations and actions of individuals”: the basis of the Austrian School. In short, bottom-up organisation of ubiquitous real-time location arguably affords better outcomes to actors with competing motivations, especially when those actors and their interests vary so greatly across countless industries and contexts!

Would it be possible to imagine an emergent Pervasive Sharing Economy as anything but bottom-up?

Perhaps the closest thing to ubiquitous location we have today is through mobile, hence the retail example above. In Still Place for optimism? we illustrated how industry relies on top-down “baseline location data” where, often to the detriment of the located mobile user, that user’s competing interests are ignored rather than embraced. Couple this with state surveillance by means of mobile location data, and it becomes very difficult to make the case for a top-down approach as the catalyst for the widespread exchange of real-time location data necessary to deliver the promised efficiencies of Industry 4.0.

So, returning to Ridley’s prediction, the case for expansion of the catallaxy and bottom-up order is therefore strong, provided we can break from the long-standing top-down traditions of Industry 3.0. But isn’t breaking established traditions what revolutions are all about? And if we are to break from tradition, the economy of Industry 4.0 may well be described as a catallaxy. At least in the case of ubiquitous real-time location it’s difficult to imagine otherwise!

Are we selling discomfort?

What’s your product? “It’s a real-time data stream of who/what is where/how in any physical space” anyone at reelyActive might answer today. Okay. But what are you really selling?

Curiously, what we’re really selling today may very well be discomfort.

Really.

Take for instance our client who is measuring attendance and zone occupancy at a festival right now. Will the data comfort the festival and its sponsors by telling the story they want to hear about record numbers?   No.   The data will tell the story of what really happened, inevitably raising uncomfortable questions.

Take for instance our client who uses our data to help brick-and-mortar retailers compete on in-store client experience. Will the customer journey data comfort the retailer about how well they’re doing?   No.   The data will reveal their failures to meet their clients where they are, again, raising uncomfortable questions.

Take for instance our client who uses our data to observe occupancy and space utilisation in their offices. Will the data comfort them by showing that every space is just right and delivering outstanding employee experience?   No.   The data will tell the story of which spaces are under- and over-utilised. They can expect an uncomfortable discussion about why they spent X on a space that nobody is using while there’s a shortage of some other type of space.

Why are our clients buying discomfort?

Our clients are buying discomfort because it motivates them to improve. Their competitors may be buying comfort: products and services that make the stakeholders and the status quo look good. And while that may still be an effective short-term strategy, our clients are embracing continuous improvement, and the associated discomfort, as they recognise this as the winning long-term strategy.

A festival that addresses what isn’t driving attendance delivers a better sponsor and attendee experience the following year. A retailer that meets its customers at the right time and place earns and retains their loyalty. A business that regularly adapts its offices to the needs of its employees enjoys the highest productivity.

So, are we selling discomfort?   Yes, in effect.   But our clients are more than comfortable with that.

Data is Human

“It’s up to smart humans to stop being stupid about AI.” That was the title of what we’d argue was the most impactful talk at Collision 2019 in Toronto. A single slide of three words eloquently summarises the challenge and opportunity of our era: Data is Human.

The presenter, Christian Beedgen, went on to connect this concept with a quote from Immanuel Kant:

“Treat people as an end, and never as a means to an end.”

Combined, the implication is as follows:

When we consider that data is simply data, it is easy, even generally accepted, to treat data as a means to an end.

When we consider that data is human, we are confronted with the ethical dilemma of treating humans as a means to an end.

The challenge of our era is to consider that data is human, despite the opportunity presented by, for instance, the lucrative advertising-revenue models of immensely successful Web 2.0-generation businesses, which rely on data as a means to an end.

Imagine if industry did indeed consider data to be human and treated it as such. Would we have had the occasion to write the following blog posts over the course of the past six years?

— We need to move Beyond people-as-a-product [2018].   Why?   Because data is human, and we should treat people as an end and not a means.

— We should take care to treat The IoT as your Brand Ambassador [2016].   Why?   Because data is human, and we should treat people as an end and not a means.

— Society can HOPE for a better Link [2016].   Why?   Because data is human, and we should treat people as an end and not a means.

— There’s a strong case for The Bank of Personal Data [2015].   Why?   Because data is human, and we should treat people as an end and not a means.

— There’s a legitimate fear of Big Brother and the Identity of Things [2013].   Why?   Because data is human, and we should treat people as an end and not a means.

Of course, all too often, industry continues to treat data in the manner that is most convenient, not necessarily that which is most appropriate. This is even more concerning in light of the Internet of Things, where humans generate orders of magnitude more data, often unknowingly.

In fact, our cheeky April Fool’s post introduced the concept of the “Digital Conjoined Twin”, arguably the ultimate manifestation of data being human. Will industry practices drive people to a point where they will go so far as to host their own data on their person? Almost certainly not, assuming that companies embrace, in earnest, the concept that data is human.

However, if companies continue to treat user data as a means to an end, the consequence may well be their users finding a means, however extreme, to end the relationship. It’s up to smart humans to avoid that outcome.

RFID Journal Live 2019

Oh the irony of human-entered data at an RFID conference. Ten years ago, Kevin Ashton, who coined the term “Internet of Things”, explained in RFID Journal:

We need to empower computers with their own means of gathering information […] without the limitations of human-entered data.

Case in point, the badge: the surname and given name are reversed, with the latter misspelled, a result of human data entry from a paper & pencil form during onsite registration. Nonetheless, this is an excellent example for emphasising the potential of RFID and the IoT!

Indeed, at the co-hosted IEEE RFID event, I, Jeffrey (“Jeffery” according to my badge), presented a workshop entitled Co-located RFID Systems Unite! focused on this potential, now that there are nearly 20 billion RAIN (passive) and BLE (active) units shipping annually. An open architecture for collecting, contextualising and distributing the resulting data is becoming critical, and I was pleased to hear this sentiment echoed on the RFID Journal side by Richard Haig of Herman Kay and Joachim Wilkens of C&A.

Also heard echoed was the prevalence of BLE (active RFID) throughout the conference. Literally.

This contraption, which converts radio decodings into musical notes, may seem odd at first, but over the past year we’ve learned that art is a powerful tool for conveying to a non-technical audience the prevalence and potential of RFID and IoT in our daily lives. A few attendees were invited to listen with headphones and walk around until they found a silent spot. None were successful.
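One plausible way such a contraption might turn decodings into notes is sketched below. The actual mapping belongs to the installation; this is purely an illustrative guess:

```python
# Sketch: map a radio decoding to a MIDI note. Each transmitter gets a
# stable pitch; signal strength sets the velocity (louder when closer).

import zlib


def decoding_to_midi(device_id, rssi):
    """rssi is in dBm, typically between -100 (faint) and -36 (very close)."""
    note = 36 + zlib.crc32(device_id.encode()) % 49   # MIDI notes 36..84
    velocity = max(1, min(127, 2 * (rssi + 100)))     # clamp to 1..127
    return note, velocity
```

With dozens of transmitters in range at any moment, the output is never silent, which is exactly the point the installation makes.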

And we can only expect such prevalence to increase with energy harvesting technology maturing. We were pleased to see Wiliot’s live demo of an energy harvesting BLE tag, making good on their objectives from last year’s conference. Inexpensive battery-free BLE will be key to RFID proliferating to all the physical spaces in which we live, work and play—the BLE receiver infrastructure is often already there.

Which came first: the RFID or the Digital Twin?

The concept of the Digital Twin has also taken off over the past year, and we were pleased to have the opportunity to ask Jürgen Hartmann which came first in the Mercedes-Benz car factory example he presented. His answer was clear:

“Without RFID, for us there is no Digital Twin.”

Ironically, our April Fool’s post from two days prior was about Digital Conjoined Twins, where we joked that the digital twin resides in the optimal location: adjacent to the physical entity that it represents. Perhaps not so silly in the context of industrial applications highly sensitive to latency?

RFID projects championed by the organisation’s finance department?

That is exactly what Joachim Wilkens of C&A argued. The success of their retail RFID deployment was a direct consequence of the C-level being on board, but more importantly of having a business case championed by the finance department:

“This is not an IT project, this is a business project.”

While we’ve observed our fair share of tech-driven deployments over the past few years, we’re increasingly seeing measurable business outcomes. For instance, a recent workplace occupancy deployment delivered, within months, a 15% saving in real estate. That is a business project—one the finance department would love to repeat!

IoT: the next generation

What will we discuss in our RFID Journal Live 2029 blog post when the IoT celebrates its third decade?   That may well be in the hands of the next generation.   Since we began attending the co-hosted IEEE RFID and RFID Journal Live in 2013, we’ve observed a slow but steady shift in demographics. A younger generation—one which grew up with the Internet—is succeeding the generation instrumental in the development and commercialisation of RFID. On the showroom floor, we’re talking about the Web and APIs. At the IEEE dinner we’re discussing industry-academia collaboration to teach students about applications and ethics. And in the IEEE workshops, ASU Prof. Katina Michael took the initiative to invite one of her undergraduate students to argue the (highly controversial) case for implantables, effectively ceding centre stage to the next generation.

RFID's next generation is coming of age

The final print copy of RFID Journal we received back in 2012 is entitled “RFID’s Coming of Age”. Today I would argue that RFID’s next generation is coming of age. 1999 saw the emergence of the terms IoT and Web 2.0. Might we expect 2019 to mark the emergence of the term RFID 2.0?

The Web turns 30!

The World Wide Web turns 30 years old today, March 12th 2019, and its inventor, Sir Tim Berners-Lee, has asked its citizens to help build a timeline of the web’s history. Given that reelyActive has existed on the Web for about a quarter of its history, we thought it’d be fun to look back at our own Web presence since 2012.

2012: First landing page

Our first landing page showed off our newly minted hardware — purely in monochrome — and guided visitors to either a serious or a silly site. In 2012, real-time location of individuals (aka tracking people) was controversial, and the silly site was intended to show visitors that we were indeed looking critically at ourselves and the platform we were developing. Indeed there’s still a sillyActive filter on this blog!

2012: The “serious” site

Yes, this was our non-responsive-design, fully monochrome, “serious” developer-oriented website in 2012. Fixed 960px width because who needed more resolution than that, right? We had just learned PHP, which was applied generously throughout. Credit to Twitter and LinkedIn for the social plugins in the footer that are still functional at the time of writing!

  ↪ Journey back in time to www.reelyactive.com/serious/

2013-2014: The “corporate” site

Finally colour! And JavaScript animations! Curiously enough, our first major experience with JavaScript was server-side with Node.js, which then gave us the confidence to apply it client-side, in this case with the TweenMax JS framework. Despite its non-responsive 960px formula, this website received a lot of appreciation for the Technology and Applications animations which were refreshingly accessible to a broad (non-native-English-speaking) audience.

  ↪ Journey back in time to www.reelyactive.com/corporate/

2014-2016: The “context” site

By 2014 we had finally gone responsive, abandoning ground-up web design for the lovely Creative Commons frameworks from HTML5 UP that we continue to use to this day. There was even a full-screen explainer video and fresh graphics from local designers.

  ↪ Journey back in time to context.reelyactive.com

2016-present: Our current landing page

Our current landing page has benefited from many iterations, and the most powerful changes are the ones behind the scenes. Many of the pages include linked data in the form of JSON-LD, which became a Web standard in 2014. This is what allows Google and other search engines to extract rich content, using Schema.org as a vocabulary. All of our IoT initiatives, such as Sniffypedia.org, rely heavily on these latest standards which are extending the reach of the Web to the physical world.

  ↪ Visit our webpage at www.reelyactive.com
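The linked data described above can be illustrated with a minimal Schema.org snippet. The field values here are illustrative, not our actual markup:

```python
# Sketch: the kind of JSON-LD a page embeds so that search engines can
# extract rich content using the Schema.org vocabulary.

import json

org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "reelyActive",
    "url": "https://www.reelyactive.com",
    "description": "Making physical spaces searchable like the web.",
}

# Embedded in the page as:
#   <script type="application/ld+json"> { ... } </script>
snippet = json.dumps(org, indent=2)
```

Because the vocabulary is shared, the same structured data serves search engines, browsers and IoT initiatives alike.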

Thank you Sir Tim Berners-Lee for your enormous contribution to humanity in the form of the Web, and to everyone who has contributed to its evolution and maintenance these past three decades. We were pleased to give a friendly nod to your contributions in our most recent scientific publication, and we hope that Mrs. Barnowl will have the chance to meet you and thank you in person, as she did Vint Cerf for his contribution to the Internet!