3rd Network Economy Forum Workshop @LSE

On Friday 11th January 2013 we held the 3rd Network Economy Forum Workshop @LSE. The workshop programme was distributed exclusively to invitees, and the event followed the Chatham House Rule, as in our previous workshops. Here we summarise the main debates, questions and key points from the various keynote speakers who participated in the event.

 

Workshop summary: Third LSE Network Economy Forum
11 January 2013

The third LSE Network Economy Forum addressed the theme “Digital Service Delivery with New Internet Architectures: Policy and Business Strategy Critiquing Telecommunication Infrastructure and Business Models”. The 45 participants included experts from regulators, telecom operators and internet application firms, as well as financial analysts and academic researchers. Our discussion was divided into sessions on policy and industry. As we follow the Chatham House Rule, this summary describes issues raised without attribution. The workshop was chaired by Dr Jonathan Liebenau, Reader in Technology Management.

The first session on policy was led by Prof James Alleman (University of Colorado, Boulder and CITI, Columbia University), who had prepared discussants on topics ranging from how the European Commission is keeping up with the commercial realities of change on the internet to the extent of the European single market for digital services.

A representative from the EU Commission responded on how the Commission is keeping up with commercial realities, where it perceives business and consumer practices changing, how innovation in the network affects services, and what fragmentation in the single market needs to be overcome by more consistent regulation focusing only on bottlenecks. The European Commission released a study in February 2012 showing that, if the internal market for electronic communications were completed, EU gross domestic product (GDP) could grow by up to 110 billion euros a year, or more than 0.8% of GDP. Some participants representing the perspective of consumers raised concerns that the European Commission’s efforts focus on infrastructure investment in fixed networks (fibre) and mobile (4G/LTE) to a much greater extent than on services. The difficulties for businesses launching pan-European services seem to be due to sprawling national regulation, the lack of multinational licensing and inconsistent financial policies for spectrum allocation for mobile telecom services. Some participants suggested that the current regime struggles to deliver for business users.

A question raised by several participants concerned how much regulation is needed for policy making to support functioning markets that deliver social benefits. The regulatory community pointed out that the discussion needs to be separated into issues of competition and issues of regulation, and offered the reassurance that markets should not be created by regulators. An incumbent response was that specific regulation of business usage would take a very long time to outline and apply, and that large incumbents already struggle to grapple with existing regulation. The EU Commission view seems to be that the current framework could address the needs of business users if applied better by national regulatory authorities (NRAs). The position of investors seems to be that they would perhaps prefer fewer regulators in Europe, while recognising that national markets encompass considerable internal diversity. Diverging views were raised as to the utility of defining sub-national markets: arguments in favour include the possibility of identifying sub-markets that behave differently and hence need different levels of remedies (in functioning markets, presumably none). The arguments against mainly focused on the problem that further regulatory complexity is an additional obstacle to any transnational evolution of internet services. The regulatory community noted that across Europe there is a need to “depoliticise” regulation, as there is too much unhealthy overlay of regulatory principles with political goals; regulators across Europe have also identified a range of “endangering” practices by influential firms that could limit the control that users have over services.

Following the coffee break, Prof William Lehr (MIT) gave a keynote speech on usage costs, interconnection, regulation, and thoughts on policy and research challenges going forward. Prof Lehr concluded that CDN payments to ISPs do not, in themselves, imply an abuse of market power and may be consistent with efficiency. However, such payments or interconnection terms might in some circumstances represent an abuse of monopoly power, which suggests a need for continued regulatory monitoring. It is also reasonable to expect ISPs to seek adjustments to ensure adequate cost recovery, and an obvious candidate for such changes in pricing models would be a shift toward usage-based pricing. Further effort is needed for the community, including regulators, to work out what data will be necessary, depending on which problems pose the most significant difficulties. In addition to work with new types of datasets, new metrics and presentation/analysis tools are needed. This will require multi-disciplinary collaborations crossing private/public boundaries, academic disciplines, and business/industry boundaries. Some specific topics worthy of immediate attention include better tools and insights to make sense of current internet interconnection topologies, traffic patterns, and business practices. The Forum chairman confirmed that these topics are also the focus of current LSE Tech research, and that if regulators want to have tools for ex-post intervention, continuous data gathering is crucial.

The second session, on industry, was chaired by Prof Lehr, who addressed, among other questions, “Are new industry configurations able to accommodate the changing character of the internet?” and “What inhibitors to competition are likely to plague the sector in Europe for the coming decade?”

The incumbent community stressed the need for a consistent approach to support the convergent market: current regulation provides an uneven playing field at the service level because of differing national regulations. The discussion also raised the possibility of global offers and considered that different network technologies must be deployed in different sub-national areas to enable cost optimisation. Network management models were also addressed; many supported the view that models embedded in network switching and models overlaid on the network must be allowed to co-exist, as long as two conditions are fulfilled: competitive retail markets and transparency. An investor commented on the difficulty of “origination” versus “termination”: the industry needs to accept that the likes of Google and Netflix create services that users demand, and this will not change in the foreseeable future. Further investor observations included that current competition is structured around “resellers” of mobile, copper (e.g. unbundled local loops) and content. This means that the initial margins of “resale” have fallen, indicating that regulation has worked, and the industry’s attention should now turn to scale. However, players such as Google and Apple can now use apps to arbitrage, so the industry needs to rethink the “structure” of competition. Another investor underlined that the environment in 2013 differs from that of 2000, since capital is scarce and many industries compete for it. It was also noted that the investor community generally prefers markets with less regulation when making allocation decisions.

On questions of regulatory frameworks, one operator pointed to evidence in certain non-European markets that ex-post competition monitoring and enforcement provides an alternative to restrictive ex-ante approaches. The discussion then centred on business models and a distinction between “dumb networks”, which could be run separately as a utility, and retail activity, which benefits from scale. One participant claimed that operational expenses for retailing network capacity are consistent across operators in Europe. An operator noted a clear need to evaluate thoroughly the efficiency of two operational modes: continuing to build parallel, separate transport capacity (infrastructure), or implementing mechanisms that allow differentiation of specific traffic flows on a “single infrastructure” (i.e. the public internet). A content provider commented on the importance of getting this right and drew a parallel with the travel sector, where flights depart at the same time for economy class and first class passengers alike. Usage-based pricing was also discussed, with one regulator questioning whether users really want it. Prof Alleman responded that findings from a recent survey in the US showed that a majority of trial users preferred usage-based pricing, but only after 12 months of being exposed to it. This underlines the learning curve involved and the importance, from users’ perspective, of long-term, sustainable business models. An operator pointed out that if content providers paid small fees for data usage, it would incentivise them towards traffic efficiency (e.g. for video traffic). A service provider commented that it is indeed important to find win-win propositions, but that increasing distribution costs could affect resourcing for content creation.

The NEF Chairman finished by emphasising the research community’s role in facilitating evidence-based debate and in gathering the data needed to support ex-post regulation through efficient monitoring of the internet. Such resources are provided by the LSE Network Economy Forum, by MIT, and by some within the industry itself who make data and tools publicly available.

