Large-scale trials versus pilots – what is the difference?

The notion of a trial, as commonly used to describe activities in the ICT discipline, does not seem to differ sufficiently from experimentation as used in various documents related to research and innovation. In the same context, the term testing is also frequently used. Testing, however, is much better defined through ETSI and refers to (i) conformance testing or (ii) interoperability testing, both with quite accurate definitions.

I suggest using these terms more consciously and with precise definitions in mind. The term trial should be used for activities that aim to verify the functionality of a system or parts of it, i.e. for cases where correct functionality is the primary interest.

However, towards market deployment, stakeholders are much more interested in the interaction with the intended customer of a service or device (a product), and in particular in whether and under which conditions the customer is prepared to engage in a business relationship for using the product. In other words, if we add the business dimension to the trial activities, we have a pilot, which could be defined as follows:

  • A pilot is the execution of a trial including business relationship assumptions, exemplifying a contemplated added value for the end-user of a product

Furthermore, the attribute of scale (e.g. large-scale trial, large-scale experiment) is often used without further precision. Scale can refer to the scope or extent of a trial, and “large” can imply heterogeneity, based on the assumption that a large-scale activity exceeds the borders of a single laboratory environment. It is, however, more useful to use cost as the measure, where cost is a function of complexity, dimension and environmental conditions. One possibility is to postulate that large-scale denotes an environment whose cost exceeds that of a laboratory environment at any given point in time by one or more orders of magnitude.
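To make this postulate a bit more concrete, here is a minimal, purely illustrative sketch; the one-order-of-magnitude threshold and the example cost figures are assumptions for illustration, not part of any formal definition:

```python
import math

def scale_class(trial_cost: float, lab_cost: float, threshold_orders: int = 1) -> str:
    """Classify an activity as 'large-scale' if its cost exceeds a reference
    laboratory cost by at least `threshold_orders` orders of magnitude.
    The threshold is a postulate, not a standardised definition."""
    if trial_cost <= 0 or lab_cost <= 0:
        raise ValueError("costs must be positive")
    orders = math.log10(trial_cost / lab_cost)
    return "large-scale" if orders >= threshold_orders else "lab-scale"

# Hypothetical example: a 2 M EUR trial against a 150 k EUR laboratory setup
print(scale_class(2_000_000, 150_000))  # -> large-scale (just above one order of magnitude)
```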

For the sake of completeness, the sequence of maturity of research and innovation artefacts is: proof of concept –> prototype –> demonstration –> trial –> pilot –> commercial product.

Within the scope of the 5G-PPP programme, and with the previous definitions in mind, it appears appropriate that phase I is scoped, among others, by “proof of concept”, and that phase II is scoped by “Prototypes, technology demos”. “Large-scale demonstration and trials” are appropriate for phase III. Most importantly, the right scope for “pilots” is also phase III.

In order to introduce the right level of interaction with the customer, the necessary “prototype” business relationships have to be “piloted”. As a consequence, I see Large-Scale Pilots (LSPs), and pilots in general, as the elements of a viable and prosperous 5G-PPP-enabled innovation ecosystem and a stepping-stone to operational 5G ecosystem innovation.

The emerging integration of satellite technology into the 5G infrastructure – Markets and stakeholders

Communication satellite (photo courtesy of ESA – copyright, 2016)

In many cases, the telecommunication and satellite markets can be seen as mature, with little or no opportunity for sustainable competitive advantage due to limited potential for differentiation. The technology is stable and well diffused, and it is easy to enter the market thanks to developed infrastructures. Virtual network operators are established in the terrestrial market, and the trend is also emerging in the satellite market. There is high international competition, making domestic cost advantages vulnerable. The sources of cost advantage are economies of scale, low-cost inputs and low overheads. Concerning the sources of innovation, there is limited opportunity for product and process innovation but considerable opportunity for strategic innovation. Finally, a strong trend towards consolidation and alliances can be observed.

The sources of strategic innovation are reconfiguration of the value chain, redefinition of markets and products, and new approaches to differentiation. However, strategic innovators are often new market entrants (CNN in news broadcasting), existing firms at the periphery (Google starting network services) or firms from adjacent industries (Apple in consumer electronics). Most strategic innovators that serve as examples are non-European. Among others, this is the reason that the European Digital Single Market (DSM) has been identified by the European Commission as one of its 10 political priorities. The DSM aims to open up digital opportunities for people and business, ensuring that they can seamlessly access and exercise online activities under conditions of fair competition, and a high level of consumer and personal data protection, irrespective of their nationality or place of residence.

The current efforts towards satellite/5G integration are mainly technology driven and exhibit behaviours of protectionism with respect to markets (e.g. broadcast content rights) and assets (e.g. spectrum). These behaviours are not in line with the DSM objectives and restrict the attention of the key stakeholders involved to the territories they have always dominated. As a further result, terrestrial network operators may see satellite operators as competitors in many market segments, and vice versa. However, this attitude prevents strategic innovation, and there is a significant risk that stakeholders from other domains will enter the market with profitable services, simply because they better understand where the added value lies.

In some respects, the situation is similar to the effect of telecoms market deregulation and liberalisation, which allowed many so-called Over-The-Top (OTT) service providers to harvest the added value on top of existing networks. The reaction of the incumbents was rather conservative and prevented them from looking forward and innovating.
So, the question in the discussion should rather be: “Are we talking to the right stakeholders?” Attention should turn to identifying the right stakeholder roles in the value chain and to providing the companies that assume these roles with the right service in their role as customers of the infrastructure. In fact, these stakeholder roles may not exist yet or may have emerged only recently.

Assuming the simple three-layer model (reflected today in the NIST cloud model: IaaS, PaaS, SaaS), where the infrastructure is at the bottom and the application is at the top, there is a middle layer responsible for shielding the complexity of the infrastructure from the application. In the market today, some stakeholders have understood that there is value in providing the middle layer. Most stakeholders positioned at the infrastructure layer try to customise their service offering towards the application layer on demand, which is not an efficient approach.
The application layer requires an aggregation of different infrastructure services that today cannot be sourced from a single company. As a consequence, one could position in this middle layer the competency of “stitching” together different technologies, network segments, data processing and storage functionality to the extent that they meet the requirements of the application layer. The NGMN 5G white paper calls this “stitched” environment a “slice”.
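As a purely illustrative sketch of what such a “stitching” competency could look like (the resource types, attributes and naive selection rule below are my own assumptions for illustration, not taken from the NGMN white paper), a slice can be thought of as an aggregation of heterogeneous resources selected to meet an application’s requirements:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    kind: str          # e.g. "satellite-link", "terrestrial-5g", "edge-compute", "storage"
    provider: str      # the infrastructure stakeholder offering the resource
    capacity_mbps: float = 0.0
    latency_ms: float = 0.0

@dataclass
class Requirements:
    min_capacity_mbps: float
    max_latency_ms: float

def stitch_slice(resources: list[Resource], req: Requirements) -> list[Resource]:
    """Naively 'stitch' a slice: pick resources whose aggregate capacity meets the
    requirement while each segment stays within the latency budget (illustrative only)."""
    chosen, capacity = [], 0.0
    for r in sorted(resources, key=lambda r: r.latency_ms):
        if r.latency_ms <= req.max_latency_ms:
            chosen.append(r)
            capacity += r.capacity_mbps
            if capacity >= req.min_capacity_mbps:
                return chosen
    raise RuntimeError("no combination of available resources satisfies the application")

# Hypothetical example: combining terrestrial, satellite and edge segments into one slice
pool = [
    Resource("terrestrial-5g", "Operator A", capacity_mbps=500, latency_ms=20),
    Resource("satellite-link", "Operator B", capacity_mbps=200, latency_ms=600),
    Resource("edge-compute", "Cloud C", capacity_mbps=0, latency_ms=5),
]
print(stitch_slice(pool, Requirements(min_capacity_mbps=400, max_latency_ms=650)))
```

The point is not the algorithm itself but the position in the value chain: whoever owns this kind of aggregation logic sits between the infrastructure and the application layer.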

At a first approximation, Google, Facebook and others can be seen at this middle layer, even if it could be argued whether they should rather be positioned at the application layer. An identifiable trend is that these stakeholders started some time ago to deploy their own infrastructure. In this context, Amazon should be positioned at the infrastructure layer. We may now ask: “Why are they doing so?” There are at least two reasons:

  1. Existing infrastructure does not offer sufficient global availability or coverage
  2. The complexity of “stitching” together a “slice” that meets their requirements is too high

As a consequence, the customers of both terrestrial and satellite infrastructure providers are the companies that are able to “stitch” together and efficiently manage “slices”. These companies may be some of the existing infrastructure stakeholders; for example, we see Telefonica, Deutsche Telekom and a few others aggressively trying to position themselves in the middle layer.

A completed DSM could contribute €415 billion per year to Europe’s economy, according to the European Commission. Attracted by this huge potential market size, new entrants are likely to enter the market, for example with the ambition to assume a role in the middle layer. This assumption is fuelled by the strong trend towards the complete softwarisation of the network, which inherently bears the risk that information technology specialists will dominate this layer. The above observation about Google and Facebook basically supports this claim. Google is today the largest user of SDN (Software-Defined Networking) in its data centres. Facebook is perceived by most of its users as a platform, which would inherently map to the middle layer when applying the NIST PaaS model.

Whether or not it is in the business interest of terrestrial and satellite stakeholders to provide seamless integration of their infrastructures depends on each company’s ambition to enter the middle layer and offer highly efficient “slices” to its customers: leveraging its own competencies to provide services optimally customised for each set of use cases, while easily sourcing in those parts of the “slice” that lie outside its own competency. In this context it should be noted that the competencies for in-sourcing and out-sourcing lie mostly with information technology companies, posing an additional threat to infrastructure providers.

Is the Internet of Things business case broken?

The Internet of Things (IoT) promised many benefits in terms of new applications and, in particular, new opportunities for a substantial change in societal behavioural patterns. And indeed, we have witnessed many exciting new technologies and applications enabled by the IoT. In a Eurescom message article from 2010, I questioned whether there actually is an Internet of Things, because at that point in time it looked like the billions of sensors were already connected to the network, and yet vendors and operators managed to keep the networks up and running. Now it is time for a reality check and a question about the sustainability of the current approach.

Cost for connectivity

In terms of cost for connectivity, I think we have not progressed much since then, and maybe we have even taken a few steps backwards. Today virtually all IoT applications are based on some sort of client/server principle, in which there is substantial computing capacity in a virtualised computing environment to which every single sensor and actuator is somehow connected. This means that beyond the investment needed for the IoT-enabled world at the frontend (Capital Expenses – CAPEX), there is substantial OPEX (Operational Expense) and CAPEX in the backend supporting IoT-enabled applications. Assuming an average lifetime of three to five years for the hardware delivering the computing capacity, we are facing a disconnect with respect to the lifetime of other supporting infrastructures, such as networking, where the depreciation time of infrastructure investments is in the order of 10-20 years or more, although shrinking. This means that the cost of supporting and serving hundreds of billions of smart objects in the long term is considerable and may not be included in the current assumptions about Total Cost of Ownership (TCO).
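A minimal back-of-the-envelope sketch of this lifetime disconnect is given below; all figures are hypothetical assumptions, chosen only to illustrate the effect of re-buying backend hardware several times within one network depreciation period:

```python
def backend_tco(horizon_years: int, hw_capex: float, hw_lifetime_years: int,
                yearly_opex: float) -> float:
    """Total cost of owning the backend over a planning horizon: the hardware has to be
    re-bought every `hw_lifetime_years`, on top of the running operational expenses."""
    replacements = -(-horizon_years // hw_lifetime_years)  # ceiling division
    return replacements * hw_capex + horizon_years * yearly_opex

# Hypothetical figures over a 20-year horizon (a typical network depreciation period):
# backend servers replaced every 4 years vs. a network asset bought once.
print(backend_tco(horizon_years=20, hw_capex=100_000, hw_lifetime_years=4, yearly_opex=20_000))
# -> 900000: five hardware generations plus OPEX, a cost often missing from IoT TCO estimates
```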

Security and trust

In terms of security and trust, I have not seen a future-proof concept to date, even less a concept that is economically viable at the anticipated scale. The traditional model, in which IT domains are organised and protected centrally by some gatekeeper, will not work for the IoT world, for reasons induced by the extreme decentralisation of most IoT-enabled systems. We cannot protect each smart object individually on an economically viable basis.

The area of trust also covers the practice of many vendors of delivering smart products that collect data about the behaviour of their customers, in the hope that these data may become an exploitable asset. This is a serious and to date underestimated problem. In many cases individuals may not care about the practice (even if they knew about it). Enterprises, however, care a lot and sometimes have the means to discover and resist it. At least in Europe, the trend in legislation is to strengthen the rights of citizens and businesses with respect to the control of their data.

Furthermore, a discovered vulnerability is a product defect and must be fixed. Traditional industries have been hit very hard economically in cases where recalls are necessary (e.g. the car industry). In the ICT industry the strategy is to distribute software updates, which nevertheless adds an increasing cost to the long-term maintenance of IoT-enabled applications. The fact that many new IT devices have a short expected lifetime motivates vendors to drop older devices from their maintenance roadmaps, so that they no longer receive security updates. How will such devices be protected in the future? Do we need to replace our smart energy meters, digital door locks and smart connected cars every three to five years when the vendors stop delivering security updates?

Future-readiness of IoT

In “cyber-physical systems” (CPS), IoT devices are directly connected to real-world artefacts and have a substantial influence on their properties. Where homes, buildings and factories are made smart, customers reasonably expect the devices used in this context to have an expected lifetime of the same order of magnitude in order to be future-proof, i.e. in the order of 30-50 years and beyond. In many cases, different generations of products, which appear every 2-3 years, have different maintenance requirements and different ways of servicing. How will vendors and suppliers be able to cope with the long-term cost of maintenance? Dumping the long-term cost on customers will not work, because when early adopters discover that the TCO of a smart fridge is an order of magnitude higher than that of the stupid old fridge, they will start to question the added value of the smart world.

The hype about IoT and CPS has triggered many “innovations” in the market whose added value for the customer is questionable. Vendors may think in terms of better maintenance and service for the benefit of the customer, or of warranty tracking for the benefit of their own supply chain optimisation. But a smart coffee brewer, a smart toaster or a smart water boiler does not provide added value to the customer unless it makes better coffee, better toast or better hot water! Certainly I don’t want to devalue the many useful applications of the new technology, but it is always a simple and clear value proposition that decides the market success of an IoT-enabled product or service.

All the above deficiencies culminate in the uncomfortable truth that the current business models around IoT might be broken. Of course, this is the pessimistic view. An optimistic view formulates the problem as: no one has yet found a viable and sustainable business model for the large scale. To progress the search for viable business models, the discussion of the broader issues above indicates that we have to face issues in four dimensions, namely (i) technology, (ii) business, (iii) policy and (iv) last but not least the customer. All four dimensions are a source of requirements that have to be satisfied at the same time. Smart city experiments around the world are a good starting point to learn how to deal with all these requirements, since all four dimensions are prominently present in most scenarios. However, all smart city pilots that I have encountered to date are just that: pilots. None of these experiments claims a self-sustaining operation that is ready for the long term at city scale.

Sustainable business models should not be designed around the traditional understanding of value chains, but rather with the flexibility to cope with the value networks that emerge in digital business ecosystems.

Proposing Horizon 2020 projects – why we need a more efficient process

Just two days after Easter was the deadline of the first ICT call of Horizon 2020, the current framework for European-funded R&D projects, with the official name “H2020-ICT-2014-1”. As expected, the interest from the European research community was immense. In total 1,645 proposals were submitted, of which 1,106 were so-called “Research and Innovation Actions”, 375 were “Innovation Actions” and 164 were “Coordination and Support Actions”. If we are lucky, 150-200 projects will eventually be funded.
The huge number of submitted proposals did not come as a surprise. After all, there has not been a large ICT call for quite some time, and for various types of organisations the Horizon 2020 funding conditions are more favourable than the FP7 conditions. The fact is that the potential overall “success rate” is roughly 1 in 10 proposals – interestingly, quite similar to the last large FP7 ICT call more than a year ago (please read my blog article of 17 January 2013). The potential success rate varies a lot across Work Programme topics: it can be lower than 5% (i.e. less than 1 in 20), for example in digital gaming, but also nearly 50%, for example in novel materials for OLED lighting.

Let’s discuss some possible conclusions.

The relation of proposal-time to project-time.
If we roughly assume that preparing a project proposal takes an average effort of about 2 person-months (a quite moderate assumption), the total effort for preparing and submitting the “H2020-ICT-2014-1” proposals was about 330 person-years. The total funding is €658.5 million. If we assume a very moderate 100 k€ average cost per person-year, we get about 6,600 funded person-years in Call 1. So the ratio of proposal-time to project-time is around 5%, which is quite a high level. On the one hand, we need a transparent and open process to select the most promising European projects; on the other hand, the equivalent of 330 European ICT researchers were busy for a year preparing proposals, of which only 10% are successful – while researchers in other regions were solving their research problems.
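For transparency, the arithmetic behind these figures can be reproduced in a few lines; note that arriving at about 330 person-years implies counting roughly 10 productive person-months per person-year, which is an assumption made here for the sake of the calculation:

```python
proposals = 1645              # submitted to H2020-ICT-2014-1
effort_pm = 2                 # assumed person-months of effort per proposal
months_per_py = 10            # assumed productive months per person-year
funding_meur = 658.5          # total Call 1 funding in million EUR
cost_per_py_keur = 100        # assumed average cost per person-year in kEUR

proposal_py = proposals * effort_pm / months_per_py    # ~329 person-years of proposal writing
funded_py = funding_meur * 1000 / cost_per_py_keur     # ~6,585 funded person-years
print(f"proposal effort: {proposal_py:.0f} py, funded effort: {funded_py:.0f} py, "
      f"ratio: {proposal_py / funded_py:.1%}")          # ~5% overhead
```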

The self-fulfilling proposal inflation.
Based on experience with earlier calls, every proposer knows that the success rate for FP7/Horizon 2020 proposals is very low. Statistically, about ten submitted proposals are needed for every successful project. This means that proposers tend to prepare more proposals than they would if the chance of success were higher. If everybody acts like this, the number of proposals is driven up automatically in a vicious spiral. The only way out is either a change in the proposal process (see below) or a common agreement amongst the European research community to concentrate only on the strongest proposals that an organisation can really handle. The latter is very unlikely, though.

The influence on the motivation of the proposers.
I don’t believe that the low statistical success rate has much influence on motivation during the proposal phase. Once a team has decided to work on a challenging proposal, it does not have the success rate in mind. On the contrary: knowing that there is fierce competition, and that only the very best proposals will be funded, could even increase the motivation to submit an excellent proposal. On the other hand, many potentially valuable proposals might not be considered at all, because potential project teams might decide that, with such a low chance of success, it is not worth the effort. After the proposal phase there is usually a lot of frustration when really good, well-elaborated proposals have not been selected simply because other proposals were ranked higher – not always with full understanding of the selection criteria.

Alternative ways for funding European activities.
There are certainly other ways, probably not as transparent and open, but requiring less effort from the research community for the proposal process, for example:
• A two-stage proposal process, where in the first stage only a short description of the project idea is required. The disadvantage is that a two-stage selection process takes longer, and a decision based on the short first-stage description is less conclusive.
• Performing more R&D work through tenders, where the funding party clearly describes what shall be done. This can lead to much more targeted results in areas of benefit for Europe and its economy. Examples of such a system are ESA in Europe and DARPA in the US.
• Funding other European bodies to perform targeted European R&D. Good examples of this method are the standardisation mandates between the European Commission and European standards organisations such as ETSI, CEN and CENELEC.
• The Celtic-Plus EUREKA Cluster is also a good example of identifying and selecting good projects. Celtic-Plus has a much lighter process for proposing projects; however, the big challenge there is to obtain the national funding after a project proposal has been approved (please read the blog article “FP7 Call Frustrations – Now What?” of 6 May 2013).

I would recommend that the European research community think about possibilities for lighter proposal processes that require less overall expert time for preparing proposals, of which 90% are not funded.
