
Thursday, September 26
 

12:00pm

Lunch and Learn
We will hear from leading scholars on a range of topics.

Co-moderators:
Nicol Turner-Lee, Minority Media and Telecom Council and TPRC Board Member
Gigi Sohn, Public Knowledge and TPRC Board Member

Panelists:
Krishna Jayakar, Penn State University, “Broadband and Unemployment”
Scott Jordan, University of California at Irvine, “Broadband Data Caps”
Michael Mandel, Progressive Policy Institute, “Data, Trade and Growth”
Judith Mariscal, CIDE, “Socioeconomic Impact of Mobile Devices”
L. Jean Camp, Indiana University, “Efficient Methods to Guard Against Online Risk”

Thursday September 26, 2013 12:00pm - 1:30pm
Russell Senate Office Building
 
Friday, September 27
 

2:00pm

Intelligent Mobile Devices and their Impact: Perspectives, Lessons, Issues and Challenges
Download Paper

This panel will focus on the socio-economic impact of the cell phone and other, more intelligent mobile devices in both developing and developed countries; the role that wireless access and mobile broadband play in various national and regional broadband strategies; and how they are integrated with the wireline components of such strategies. The panelists will discuss strategies being used in Australia, the EU, the US, South Africa, Latin American countries such as Brazil and Mexico, and South Asian countries such as India, among others. We wish to find out what has worked, what has not, the problems encountered, and whether there are lessons of general applicability, including for the US and Canada. At the same time, we would like to explore the possibilities and limitations of learning from other nations’ and regions’ experiences.

Moderators

Prabir Neogi

Carleton University, Canada-India Centre for Excellence

Speakers

Rekha

Executive Chair, Telecom Centre of Excellence, Indian Institute of Management Ahmedabad
Broadband, Internet Governance, Spectrum Auctions

Judith Mariscal

Centro de Investigación y Docencia Económicas (CIDE)

Catherine Middleton

Canada Research Chair, Ryerson University

Jean-Paul Simon

IPTS, European Commission


Friday September 27, 2013 2:00pm - 3:30pm
GMUSL Room 120

2:00pm

MOOCs and Online Learning: Digital Disruption in Action?
Massive Open Online Courses (MOOCs) epitomize the fundamental changes impelled by innovations in communications and technology. Already, millions of students are taking free online courses, and most of the world's top universities have joined MOOC initiatives. Some predict MOOCs will completely disrupt universities in the next few years. Despite all this, there is great confusion about what MOOCs are, how they relate to earlier forms of online learning, and what their implications are. This roundtable will highlight the unique elements of MOOCs, their likely effects, and the important legal and policy questions they raise. The panel will provide an opportunity for the TPRC community to gain a better understanding of the budding revolution in online higher education and make connections to potential research opportunities in the communications and information policy fields.

Moderators

Kevin Carey

New America Foundation

Speakers

Friday September 27, 2013 2:00pm - 3:30pm
Founders Hall 111

3:30pm

Coffee Break
Friday September 27, 2013 3:30pm - 4:00pm
Atrium

4:00pm

Impact on Broadband Adoption: Evidence and New Research Directions from Latin America
Download Paper

A recent debate has emerged about the contribution of broadband and related technologies to achieving development goals. The relevance of this debate is amplified by the ambitious national broadband plans set forth by governments around the world, which involve large public investments in broadband infrastructure, applications and services, as well as training.

This panel presents the results of four impact evaluation studies on the effect of broadband investments and adoption on key development outcomes, as well as five studies that measured the development impact of broadband.

We will also present the results of an exploratory study that identified how the poor obtain, share, and utilize information and communication resources in their everyday lives – the Information Lives of the Poor.

Moderators
Speakers

Friday September 27, 2013 4:00pm - 5:30pm
GMUSL Room 120

4:00pm

Is Common Carriage Still Relevant?
Download Paper

Communications is increasingly all-IP and flows over cellular or broadband networks. A great deal of it is mediated not just by traditional "carriers" but also by platforms such as Skype, Facebook, Twitter, or Google. Where does this leave the concept of common-carrier networks? Primary features of telephone common carriage included a regulated monopoly in which each customer was able to call every other customer at clearly defined and nondiscriminatory tariffs. In a time when a circuit-switched "call" is increasingly archaic and customers generally have choices, is common carriage merely an anachronism?

This panel will discuss how common carriage has been applied in the IP world and whether and how it should be applied in the future. Up to now, common carriage's most important modern equivalents have been "open access" and "network neutrality." These relate to two quite different relationships, the first between physical infrastructure and communications services, the second between service providers and content.

Moderators

Carolyn Gideon

Asst Prof Int'l Communication and Tech Policy, Tufts University

Speakers

Christiaan Hogendorn

Wesleyan University

Mark Jamison

University of Florida - PURC

Christopher Yoo

University of Pennsylvania


Friday September 27, 2013 4:00pm - 5:30pm
Founders Hall 111

5:30pm

Reception and Poster Sessions
Friday September 27, 2013 5:30pm - 6:30pm
Atrium

6:30pm

Dinner
Friday September 27, 2013 6:30pm - 7:15pm
Multi purpose Room

 
Saturday, September 28
 

9:00am

Spectrum Floors in the UK 4G Auction: An Innovation in Regulatory Design
Download Paper

This paper evaluates the regulatory approach to competition issues in the UK 4G spectrum auction, which involved the innovative use of spectrum floors: the flexible reservation of portfolios of spectrum for either a new entrant or the smallest incumbent national mobile competitor (H3G). Spectrum floors have two dimensions of flexibility: different portfolios of spectrum can be reserved for different players, e.g. depending on their pre-auction spectrum holdings; and the choice of spectrum to be reserved, from a range of portfolios each of which is sufficient to promote competition, is decided through the auction as the floor that minimises the opportunity cost of reservation.

The use of spectrum floors appropriately emphasised output-market efficiency by promoting downstream competition. But its implementation in the auction also sought to maximise auction efficiency (subject to the constraint of the floor) through sophisticated modifications to the already complex design of the combinatorial clock auction. This approach represented a balance between the risks of market and regulatory failure. On the one hand, H3G or a new entrant might fail in the auction to acquire the spectrum required to promote downstream competition, because of asymmetries between intrinsic and social value or because of strategic investment by the larger incumbents (EE, Telefonica and Vodafone) to deny spectrum to competitors. On the other hand, under the traditional approach of spectrum set-aside there might be a high opportunity cost of reservation through choosing the wrong spectrum to reserve.
In the UK 4G auction held in January and February 2013 the full potential of spectrum floors was not, in the event, tested, because there was no competition between bidders eligible to obtain reserved spectrum - H3G was the only such bidder willing to pay the reserve price of the spectrum floors. However, there was still competition over the choice of spectrum floor for H3G, which resulted in an outcome that some considered surprising. H3G won the spectrum floor of 2x5MHz in the higher-value 800MHz band, i.e. scarce sub-1GHz spectrum, rather than 2x20MHz in the higher-frequency 2.6GHz band. Analysis of the bid data comparing these two floors shows that the marginal value in H3G's package bids exceeded the overall opportunity cost to other bidders. This outcome was not a fluke or an aberration, as it clearly reflected the consistent pattern of bids made in the auction. In particular, it reflected H3G's strategy of bidding marginal values for the floor packages equal to the differences in their reserve prices, which guaranteed that it would not pay more than the reserve price for its winning spectrum floor, and EE's consistently aggressive bidding at the margin for 2x20MHz of 2.6GHz compared to 2x5MHz of 800MHz.
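To make the selection logic concrete, here is a toy calculation with invented valuations, greatly simplified from the combinatorial clock auction's actual pricing rules: the reserved floor is the one whose withdrawal from open competition forgoes the least value from other bidders.

```python
# Toy illustration of choosing between spectrum floors. All figures are
# hypothetical and the logic is simplified from the actual CCA rules.

floors = ["2x5 MHz of 800 MHz", "2x20 MHz of 2.6 GHz"]

# Hypothetical marginal value (GBP millions) that other bidders place on the
# spectrum each floor would withdraw from open competition, i.e. the
# opportunity cost of reserving that floor.
opportunity_cost = {
    "2x5 MHz of 800 MHz": 250,
    "2x20 MHz of 2.6 GHz": 310,  # aggressive marginal bidding for 2.6 GHz capacity
}

# Hypothetical marginal value the protected bidder attaches to each floor.
protected_bidder_value = {
    "2x5 MHz of 800 MHz": 280,
    "2x20 MHz of 2.6 GHz": 300,
}

# Reserve the floor that minimises the opportunity cost of reservation.
chosen = min(floors, key=opportunity_cost.get)
print("Reserved floor:", chosen)

# Efficiency check in the spirit of the bid-data comparison above: the protected
# bidder's marginal value for the chosen floor exceeds what other bidders forgo.
assert protected_bidder_value[chosen] > opportunity_cost[chosen]
```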

Based on the bids made in the auction it was clearly more efficient for H3G to win the spectrum floor of 2x5MHz of 800MHz, even if this might not have been the prediction before the auction. With the important caveat that some of the bidding may have been strategic and deviated from bidders' true intrinsic values, this suggests that one of the key elements of flexibility in the new regulatory tool of spectrum floors proved to be important the first time it was used, in the UK 4G auction. This flexibility avoided the regulatory failure that could have occurred with simple spectrum set-aside: the regulator reserving less efficient, pre-specified spectrum in the false belief that this would minimise the opportunity cost of spectrum reservation.

Moderators
Speakers

Saturday September 28, 2013 9:00am - 9:35am
GMUSL Room 221

9:00am

Broadband Internet in the Chinese Crisis Economy
Download Paper

If the United States represents a full integration of monopoly-finance capital and the Internet, China, as a latecomer country, is poised to adopt a similar goal of making broadband more central to Chinese-style capitalism in the aftermath of the 2008 global economic crisis. A review of recent trade journals provides a sense of the new trends already under way: telecom operators have competed to expand their wired and wireless broadband networks, recently under the state's new national broadband strategy. Broadband is expected to become the platform for trade in software and IT services, such as logistics, e-commerce, and consulting. Meanwhile, cloud computing and the Internet of things, the two "most promising" ICT applications, are equally seen as new, trillion-dollar information service industries, capable of transforming the wider economy and social life.

Important questions arise concerning what policy and structural changes it takes to create a broadband-based economy in China. Specifically, what changes do state-market interactions go through amidst the crises facing telecom operators and the Chinese economy in general? What are the major considerations in the state's broadband policy? How do competing telecom operators, concerned with their own interests, also articulate their roles as the state's internal rebalancing instruments? And how does the state, in tandem with telecom operators, strive to retain the commanding heights despite changing business dynamics?

If the earlier phase of Internet development facilitated China's entry into networked global production, broadband development, especially after 2005, expressed the state's new desire to shift China from an export-driven economy to one more dependent upon domestic consumption. As a concrete means to achieve China's anticipated escape from its ambivalent position in the global economic order and its ambition to participate more competitively in advanced global IT economies, broadband development involves more than just network upgrades. Indispensable are a destructive creation of telecom business models and a slew of systematic efforts to create societal demand for broadband-based applications. Indeed, without consumption, there is no growth and expansion. How will the Chinese government stimulate demand for broadband services and applications? So, while the industrial policy literature often assesses state intervention in terms of resource input and capacity buildup, this paper aims to understand a demand-centric intervention. Specifically, what pivotal projects have been carried out to create a broadband-based economy? What verifiable policy, institutional, and structural changes have been pursued to make broadband commercially viable? And how do pre-existing structural, institutional, and social tensions define the parameters of future directions?

Relying on a wide combination of trade journals, newspapers, and government documents, this paper provides a critical-historical interpretation of China's broadband project in connection with the crisis economy and China's rebalancing efforts. After reviewing China's Internet development since the mid-1990s, which was driven by global structural forces, bureaucratic capitalist rivalries, and the state's strong desire to incorporate China into world IT economies, the paper moves on to discuss 1) the changing state-market interactions and accompanying tensions as renationalization of broadband development becomes a global trend in the post-crisis context, and 2) telecom operators' uneasy transformation into information service providers and the wider efforts to create demand for broadband-based applications.

Moderators

Heather Hudson

U of Alaska Anchorage

Speakers

Saturday September 28, 2013 9:00am - 9:35am
GMUSL Room 225

9:00am

The Impact of Broadband Speed on the Household Income: Comparing OECD and Brics
Download Paper

This paper aims to measure the impact of broadband speed on household income, based on survey data collected by Ericsson Consumer Lab in eight OECD and three BRIC countries in 2010, with a sample comprising a total of 20,000 respondents. The analysis is crucial because broadband speed affects not only end-users (increasing growth capacity, providing a variety of services, and linking users to other socio-economic variables, namely health and education) but also the supply side, as the availability of speed also increases the productivity and efficiency of firms. Moreover, the study is novel in that most previous studies on broadband emphasize the penetration rate as the variable of interest.

This study adopts the framework of return-to-schooling models (e.g., Mincer, 1974; Card, 2001). In this discourse, a variety of estimations has been employed to capture differences in schooling quality and the gap between males and females. However, the impact is difficult to measure because, for instance, people differ in their tacit ability (skills and motivation). This study introduces "additional skills and experiences" by adding variables related to the access and use of ICT to the standard return-to-schooling model, in which income is affected by education, skills (managerial competencies), and a variety of socio-economic variables (e.g., age, gender, type of occupation, marital status, etc.). Access and use are believed to play important roles in increasing knowledge and skills, as shown in many previous studies (James, 2011; van Deursen & van Dijk, 2011; Hargittai, 2010). In addition, the idea behind this approach is to eliminate the problems of endogeneity of the speed variable and of reverse causality, since the speeds subscribed to by users are in fact influenced by their income levels.

To operationalize this approach, a treatment effect model is employed using Propensity Score Matching (PSM). The basic idea behind the method is to estimate the counterfactual income that people who have connected to broadband would have achieved had they not connected. Two aspects are investigated: the impact of access to broadband on income, and the impact of varying broadband speeds on income. For the access identification, the samples are those with broadband access at a particular speed level against those without broadband access. For the speed upgrades, comparisons are conducted at various speed levels, e.g., users at 2 Mbps against users at 0.5 Mbps.
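A rough sketch of this estimation strategy (synthetic data and placeholder variable names, not the Ericsson survey data or the authors' code):

```python
# Minimal propensity score matching sketch for a treatment effect on income.
# Synthetic data and variable names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "education_years": rng.integers(6, 20, n),
    "age": rng.integers(18, 65, n),
    "urban": rng.integers(0, 2, n),
})
# Treatment: has a broadband subscription at >= 2 Mbps (synthetic assignment).
p_treat = 1 / (1 + np.exp(-(0.15 * df["education_years"] + 0.8 * df["urban"] - 3)))
df["broadband_2mbps"] = rng.binomial(1, p_treat)
df["income"] = (
    1000 * df["education_years"] + 150 * df["age"]
    + 2000 * df["broadband_2mbps"] + rng.normal(0, 3000, n)
)

covariates = ["education_years", "age", "urban"]
# 1) Estimate the propensity score: P(treated | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["broadband_2mbps"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2) Match each treated respondent to the nearest control on the propensity score.
treated = df[df["broadband_2mbps"] == 1]
control = df[df["broadband_2mbps"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3) Average treatment effect on the treated: mean income gap across matched pairs.
att = (treated["income"].values - matched_control["income"].values).mean()
print(f"Estimated ATT of broadband access on annual income: {att:.0f}")
```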

On access to broadband, the results in OECD countries show that gaining access at 0.5 Mbps would not be expected to yield an increased income, as the threshold is somewhere between 2 Mbps and 4 Mbps. For BRIC countries, on the contrary, the impact is already visible at 0.5 Mbps: around 800 USD of additional annual household income is expected from introducing a 0.5 Mbps broadband connection, equivalent to roughly 70 USD per month per household.

On speed upgrades, the speed level giving the highest benefit to income is the same in BRIC and OECD countries (4 to 8 Mbps), even though higher speed levels (8 to 24 Mbps) contribute more in OECD than in BRIC countries. Moving from 4 to 8 Mbps, the incremental income generated in OECD countries is around 4% (with an average income of 37,000 USD) and around 1.5% in BRIC countries (with average incomes of 10,000 and 12,000 USD for China and Brazil respectively). However, the BRIC countries can obtain a higher impact by upgrading the speed from 0.5 to 4 Mbps; at this scale, the countries gain additional household income of 2.2% for China and 4.7% for Brazil. Note that the survey was carried out in 2010, when the sample average speed level was only around 4-5 Mbps in OECD countries and 2 Mbps in BRIC countries.


Saturday September 28, 2013 9:00am - 9:35am
GMUSL Room 120

9:00am

Mobile Privacy Expectations in Context
Download Paper

An increasing amount of social activity and commerce is performed using mobile devices such as phones and tablets. These devices collect and transfer many types of mobile data, including location information, activities, motion information, text, and sound. In addition, individuals interact with mobile devices in new ways by integrating the phone in their daily activities. With new forms of data collection, consumers, organizations, and regulators struggle to address privacy expectations across a diverse set of activities on mobile devices.

This paper will describe findings from empirical research employing a context-based survey to understand consumers' privacy expectations for mobile devices across diverse real-world contexts. The study asks:
1) How do individuals? privacy expectations change between mobile application contexts?
2) What factors change these expectations?

To answer these questions, the project will test the hypotheses that (a) individuals hold different privacy expectations based on the context of their mobile activity, and (b) contextual factors such as who (the data collection actor, e.g. the application developer or mobile phone provider), what (data attributes, e.g. the type of information received or tracked by the primary organization), why (application context, e.g. games, weather, social networking, navigation, music, banking, shopping, and productivity) and how (the use of data, e.g. the amount of time data is stored or how that data is reused) affect individuals' privacy expectations. A promising direction of privacy scholarship, privacy as contextual integrity (Nissenbaum, 2009), provides the theoretical backbone for the empirical work and posits that expectations about use and transmission of information are dependent upon the context. Individuals exchange information with particular people for a specific purpose, and expectations around what constitutes appropriate flows of information vary across such contexts. The theory suggests that tactics to address privacy expectations with mobile devices should depend on the context of the exchange.

To test the hypotheses and investigate whether and how privacy expectations vary across contexts in mobile activity, the researchers are conducting a survey using the factorial vignette methodology (Wallander, 2009), in which respondents answer questions based on a series of hypothetical vignettes. This method allows the researchers to simultaneously examine multiple factors, e.g. changes in context and types of information sharing, by providing respondents with rich vignettes that are systematically varied. The mobile privacy survey is being piloted using Amazon's Mechanical Turk to recruit respondents. After initial pilot runs, we will gather participants who use mobile applications using snowball sampling with a reward for completion.
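A minimal sketch of how a factorial vignette design can be generated by crossing such factors (the factor levels and wording below are illustrative, not the study's actual instrument):

```python
# Generate a factorial vignette universe by crossing contextual factors,
# then sample a small, systematically varied set for each respondent.
# Factor levels and template wording are illustrative stand-ins.
import itertools
import random

factors = {
    "who":  ["the app developer", "your mobile carrier", "an advertising network"],
    "what": ["your location", "your contact list", "your browsing history"],
    "why":  ["a weather app", "a banking app", "a social networking app"],
    "how":  ["stored for 24 hours", "stored indefinitely", "shared with third parties"],
}

template = ("Imagine {why} run by {who} collects {what}, "
            "and the data is {how}. How acceptable is this?")

# Full factorial universe: every combination of factor levels.
universe = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
print(f"{len(universe)} distinct vignettes in the full design")  # 3*3*3*3 = 81

# Each respondent rates a random subset, so factors vary within and between respondents.
random.seed(1)
respondent_vignettes = random.sample(universe, k=8)
for v in respondent_vignettes:
    print(template.format(**v))
```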

This paper will report on survey findings that identify contextual factors of importance in the mobile data ecosystem. Because addressing privacy expectations for mobile devices is an explicit goal of US regulatory bodies, understanding how consumer privacy expectations change in different data use and business contexts can help regulators identify contexts that may require stricter privacy protections and help firms and managers better meet privacy expectations of users. Study results will be responsive to pressing government needs and societal concerns about mobile privacy and will have direct implications for researchers, business leaders, policy experts, and consumers. Finally, the study's results will be compared to results from a parallel survey on online privacy and context to give guidance as to whether and how policies and practices about managing privacy should differ for the mobile ecosystem.

References:
Nissenbaum, H. (2009). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford Law Books.
Wallander, L. (2009). 25 years of factorial surveys in sociology: A review. Social Science Research, 38(3), 505–520.


Saturday September 28, 2013 9:00am - 9:35am
GMUSL Room 332

9:00am

Enforcement and Spectrum Sharing: Case Studies of Federal-Commercial Sharing
Download Paper

Spectrum sharing has moved from being a radical notion to a principal policy focus over the past decade. This becomes evident when one compares the FCC's Spectrum Policy Task Force (SPTF) report from 2002 [1] with the President's Council of Advisors on Science and Technology (PCAST) spectrum report [2]. The former considers spectrum sharing as a possible option while the latter makes spectrum sharing a key strategy for spectrum access.

With the significant exception of license-free wireless systems, commercial wireless services are based on exclusive use. With the policy change facilitating spectrum sharing, it becomes necessary to consider how sharing might take place in practice. Beyond the technical aspects of sharing that must be resolved lie questions about how usage rights are appropriately determined and enforced. Weiss et al. [3] developed a law-and-economics-based framework for enforcement as it could be applied to dynamic access networks. While useful, this high-level discussion does little to assist policymakers on specific problems.

To address this gap, this paper examines particular cases in spectrum sharing and studies how the enforcement principles might be applied. The bands are among those being considered by the NTIA for Federal-commercial sharing and have been discussed by NTIA's Commerce Spectrum Management Advisory Committee (CSMAC) (http://www.ntia.doc.gov/category/csmac). The first case examines the 1695-1710 MHz band, which is currently used by the National Oceanic and Atmospheric Administration (NOAA) as the primary user providing weather satellite (MetSat) downlinks (space-to-earth). Since the earth stations are stationary, it provides a good starting point for working out sharing, especially from an enforcement perspective. The second case focuses on the 1755-1850 MHz band, where the primary user is mobile and the identity and characteristics of the secondary user are known (commercial LTE). Finally, we study the 3550-3650 MHz band, where the primary user is high-power shipborne radar and the expected secondary use is varied, including unlicensed services.

From the PCAST report, it is clear that trust and accountability are primary barriers to spectrum sharing. Overcoming these barriers will require a clear approach to enforcement. As outlined in [3], various approaches to enforcement exist, each of which has opportunity costs as well as real costs. We explore the enforcement requirements of the various usage scenarios in the bands outlined above. For each of these, we explore the approaches to enforcement that may be taken and, to the extent possible, quantify the implications of each approach. We use these cases to sharpen the analysis begun in [3] and to provide specific guidance to policymakers.

Bibliography
[1] FCC, "Spectrum Policy Task Force Report," ET Docket No. 02- 135, Washington DC, 2002.

[2] PCAST, "Realizing The Full Potential Of Government-Held Spectrum To Spur Economic Growth," Washington DC, July 2012.

[3] M. Weiss, W. Lehr, L. Cui and M. Altamimi, "Enforcement in Dynamic Spectrum Access Systems," in TPRC , 2012.

Moderators
Speakers

Martin Weiss

Associate Dean, University of Pittsburgh


Saturday September 28, 2013 9:00am - 9:35am
Founders Hall 111

9:35am

Cellular Competition and the Weighted Spectrum Screen
Download Paper

If one or two cellular carriers gain control of enough spectrum, they may be able to prevent current rivals and potential new entrants from getting the spectrum needed to compete. Thus, regulators typically attempt to protect competition through some form of limit on how much spectrum any one cellular carrier can hold. The U.S. Federal Communications Commission (FCC) previously imposed a spectrum cap on the total bandwidth that a carrier can hold in any given market; it later replaced this with a spectrum screen. If a proposed transaction would cause the spectrum screen to be exceeded, this triggers greater scrutiny by the regulator, and final decisions as to whether the transaction would harm competition are made on a case-by-case basis after further investigation. With both the spectrum cap and the spectrum screen, the amount of spectrum a carrier holds in a given market has been determined by simply adding the bandwidths of all licenses held in that market, regardless of the frequency band associated with each license. However, the physical properties of spectrum depend on frequency, and treating all frequency bands as if they were the same could become more problematic as spectrum at higher and higher frequencies becomes available to cellular carriers, which is the trend.

This paper proposes the idea of measuring the amount of spectrum held in a given market with a weighted sum of the bandwidths of all licenses held, where weights are a function of frequency. The motivation for adopting such an approach is explored from both a technical and economic perspective. First, we quantify how the cost of deploying and operating a cellular network depends on the frequency of the spectrum used by constructing an engineering-economic model based on standard signal propagation equations. The more costs vary by frequency, the more reason there is to assign different weights to different frequency bands. We find that frequency has a tremendous impact on infrastructure cost in regions where population density is low, and a more modest impact in regions where population density is high. Some implications for handset cost are considered qualitatively. Second, we discuss the extent to which market valuations of spectrum licenses are dependent on frequency, relying primarily on past studies. We find that frequency is a very important factor in determining the market value of a license. Third, we present new analysis showing how the market concentration of cellular carriers in a given region relates to the concentration of spectrum holdings in that region, where spectrum holdings are measured using some form of weighted scheme and with the usual unweighted scheme. This analysis uses U.S. spectrum data that was recently made available by the FCC, and U.S. market data that became available as a result of consideration of the proposed merger of two U.S. cellular carriers: AT&T and T-Mobile. The results of this analysis shed light on the extent to which market concentration reflects spectrum concentration, and how this depends on the weighting scheme chosen.
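To make the arithmetic concrete, here is a minimal sketch of computing weighted versus unweighted holdings and the resulting concentration measure (the weighting function, bands, and holdings below are hypothetical, not the paper's):

```python
# Weighted spectrum holdings: sum of license bandwidths, each scaled by a
# frequency-dependent weight. All bands, weights, and holdings are hypothetical.

def weight(freq_mhz: float) -> float:
    """Example weighting: low-band spectrum counts more than high-band."""
    if freq_mhz < 1000:
        return 1.5   # sub-1 GHz: better propagation, lower deployment cost
    elif freq_mhz < 2300:
        return 1.0
    else:
        return 0.7   # higher bands: more sites needed for the same coverage

# (carrier, band center in MHz, licensed bandwidth in MHz) for one market.
licenses = [
    ("Carrier A", 750, 12), ("Carrier A", 1900, 20), ("Carrier A", 2500, 40),
    ("Carrier B", 850, 25), ("Carrier B", 1700, 20),
    ("Carrier C", 1900, 10), ("Carrier C", 2500, 20),
]

def holdings(weighted: bool) -> dict:
    totals = {}
    for carrier, freq, bw in licenses:
        totals[carrier] = totals.get(carrier, 0) + bw * (weight(freq) if weighted else 1)
    return totals

def hhi(totals: dict) -> float:
    """Herfindahl-Hirschman index of spectrum concentration (shares in percent)."""
    total = sum(totals.values())
    return sum((100 * v / total) ** 2 for v in totals.values())

for weighted in (False, True):
    t = holdings(weighted)
    print(("weighted  " if weighted else "unweighted"), t, f"HHI={hhi(t):.0f}")
```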

Moderators
Speakers

Jon Peha

Professor, Carnegie Mellon University


Saturday September 28, 2013 9:35am - 10:10am
GMUSL Room 221

9:35am

Implementing an Open Internet: Republic of Korea and U.S. Perspectives and Motivations
Download Paper

The Open Internet issue, while not as hotly topical as it was a few years ago, continues to be a fundamental part of the policy debate here in the United States. Interestingly, in other countries, such as the Republic of Korea, this debate is very active because of some fundamentally different implementation choices. In this paper we explore these choices and the motivations behind them.

In its Open Internet Order, the FCC adopted three basic rules: broadband providers must disclose network management practices; may not block lawful content, applications, services, or devices; and may not unreasonably discriminate against lawful traffic. It further stated that mobile broadband providers may not block websites or applications that compete with their voice or video telephony services. Since the adoption of this Order, there has been little

In 2011, the Korea Communications Commission (KCC) developed its open Internet guidelines; however, these guidelines proved insufficient to cope with the complex issues that arose when Samsung Electronics launched Smart TV and Kakao Talk launched Voice Talk (a free mobile VoIP service). One of the major mobile carriers, Korea Telecom (KT), decided to block Smart TV, and announced that it would block the Voice Talk service unless subscribers or service providers paid extra. While the KCC initially took action against KT's blocking of Smart TV, in June 2012 it came out with a decision allowing operators to restrict access to these mVoIP services for rational traffic management, provoking an immediate backlash from subscribers and advocates.

After hearing divergent opinions through several policy debates, in July 2012 the KCC published a tentative document, ‘The Guidelines for Reasonable Traffic Management & Use in the Internet,’ which gave specification to the prior basic guideline of 2011. Of note, these guidelines focus on the implementation of allowable traffic management. Here, the KCC allowed the mobile carriers (KT, SKT, LG U+) to charge a premium for voice service by discriminating among subscribers based on the level of their subscription rate, which the FCC disallows. Interestingly, the smallest mobile carrier, LG U+, stated that it would allow its subscribers to access the Kakao Talk service as part of their existing data subscription plans – with no usage limitations and no additional charges – a stance that riled the larger carriers.

In this paper, we describe the contrasting approaches and then consider what motivated these differences and the consequences they might have for the overall openness and operation of the Internet. In our initial analysis, we see that the differences between these countries' approaches are driven by differing: 1) attitudes concerning Internet liberalism, 2) user expectations, 3) network histories, 4) relationships between the regulator and the carriers, 5) maturity of the application industry, and 6) competitive environments in the market. We plan to expand on this analysis through a rigorous review of the legal records and scholarly work in this area. It is our hope that this work will help international efforts to harmonize a common understanding of Internet openness.

Moderators

Heather Hudson

U of Alaska Anchorage

Speakers

Saturday September 28, 2013 9:35am - 10:10am
GMUSL Room 225

9:35am

Broadband and Unemployment: Analysis of Cross-Sectional Data for U.S. Counties
Download Paper

This paper examines the effect of broadband availability on the unemployment rate at the county level in the United States. Using data on broadband availability from the National Broadband Map and census data on unemployment for all 3,000-plus counties in the United States, this paper estimates an econometric model that isolates the effect of broadband deployment after controlling for other factors. The paper builds on our prior work with a smaller sample, which found that increasing broadband availability from the current 93% of homes to 100% would reduce unemployment by about 0.5%.



An extensive literature exists on the relationship of telecommunications, and specifically broadband, to economic activity and growth, with the consensus favoring a generally positive impact (see, for example, Crandall, Lehr, & Litan, 2007; Kolko, 2010). However, the evidence in favor of a positive impact on employment is much more tenuous and contradictory. On the one hand, a positive effect on job creation may be expected from the increased economic activity resulting from broadband deployment. On the other hand, broadband technologies may contribute productivity gains, negating any need for additional labor. Kolko (2010) has also shown that though broadband contributes to employment generation, it may not significantly affect the unemployment rate because workers are mobile, and relocate or commute to places with higher labor demand. Atasoy (2011), on the contrary, found that broadband deployment has a significant effect on the employment rate, but that the effect is stronger in counties with a higher percentage of college graduates and in industries that employ a more educated workforce. The effect of broadband availability on employment is thus an unresolved question that requires further investigation. This paper seeks to analyze this question.



Past attempts at investigating the relationship between broadband and economic growth were complicated by the lack of adequate data on broadband deployment and availability. Previously accessible data on broadband availability, such as the FCC's ZIP-code-based broadband availability data, have been heavily criticized (Grubesic, 2008; Kolko, 2010; Prieger & Hu, 2008). Beard, Ford and Saba (2010) used the 2007 Computer and Internet Use Supplement of the Census Bureau's Current Population Survey, but this data too is more than six years old and out of date. Thus, National Broadband Map data present an attractive opportunity to test the relationship between broadband availability and employment, despite caveats such as the incomplete information and lack of pricing data pointed out by Grubesic (2012), and the measurement errors and selection bias identified by Ford (2011).



In this paper, we test the hypothesis that better broadband availability is associated with lower county unemployment rates. The county is chosen as the unit of analysis, since it is small enough to capture local variations but large enough to constitute a reasonable labor market. Data on broadband availability and employment are collected from the National Broadband Map and the Bureau of Labor Statistics' (BLS) Local Area Unemployment Statistics (LAUS) database for all 3,000-plus counties in the United States. Multiple regression is then used to analyze the data. To control for the effect of education levels found by Atasoy (2011), we include the percentage of the county workforce with a high school diploma as a control, along with average age and race. Controls are also included for the effects of changes in construction and real estate spending (a critical factor affecting unemployment rates over the last six years), and for changes in the statewide economy, since county unemployment rates may be responding to wider regional trends.
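As a rough sketch of what such a specification might look like in code (the variable names, input file, and controls are placeholders rather than the paper's actual data fields):

```python
# County-level cross-sectional regression of unemployment on broadband availability,
# with socio-economic controls. Column names and the CSV path are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("county_panel.csv")  # hypothetical merged NBM + LAUS extract

model = smf.ols(
    "unemployment_rate ~ broadband_availability_pct"
    " + pct_high_school + avg_age + pct_minority"
    " + construction_spending_change + state_unemployment_change",
    data=counties,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
# The coefficient on broadband_availability_pct is the quantity of interest:
# a negative, significant estimate supports the hypothesis that better
# availability is associated with lower county unemployment.
```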



Results obtained for counties drawn from a sample of 8 states (N=430), representing different combinations of economic development (state per capita income above or below the national poverty line) and urbanization (percentage of state population resident in urban areas above or below the national mean), indicate that there is a negative correlation between the county unemployment rate and broadband availability. Despite the care we took to select a representative sample of states, purposive sampling may still introduce biases in the results; including the entire population is expected to remedy this. Full results for the population of 3,000-plus counties will be available in time for TPRC.



References:

Atasoy, H. (2011). The effects of broadband internet expansion on labor market outcomes. Accessed March 20, 2012, from SSRN: http://ssrn.com/abstract=1890709 or http://dx.doi.org/10.2139/ssrn.1890709



Beard, T. R., Ford, G. S., & Saba, R. P. (2010). Internet use and job search. Phoenix Center Policy Bulletin No. 39. Phoenix Center: Washington, DC.



Crandall, R. W., Lehr, W., & Litan, R. E. (2007). The effects of broadband deployment on output and employment: A cross-sectional analysis of U.S. data. The Brookings Institution: Issues in Economic Policy No. 5. Accessed March 2, 2012, from http://www.brookings.edu/papers/2007/06labor_crandall.aspx



Ford, G. S. (2011). Challenges in using the National Broadband Map's data. Phoenix Center Policy Bulletin No. 27. Phoenix Center: Washington, DC.



Grubesic, T. H. (2008). Spatial data constraints: Implications for measuring broadband. Telecommunications Policy, 32(7), 490-502.



Grubesic, T. H. (2012). The U.S. National Broadband Map: Data limitations and implications. Telecommunications Policy, 36(2), 113-126.



Kolko, J. (2010). Does broadband boost local economic development? San Francisco, CA: Public Policy Institute of California. Accessed March 22, 2012, from http://www.ppic.org/main/publication.asp?i=866

Moderators
Speakers

Krishna Jayakar

Penn State University


Saturday September 28, 2013 9:35am - 10:10am
GMUSL Room 120

9:35am

It's the Definition, Stupid! Framing of Online Privacy in the Internet Governance Forum Debates
Download Paper

In the policymaking realm, the framing of policy issues and problem definition have long been powerful tools in setting the agenda for decision makers and influencing the prioritization of acceptable solutions. Policy debates about online privacy are riddled with buzzwords that establish privacy as a policy domain. Some of these frames survive over time, although their meaning and interpretation change; others gain temporary visibility only to be left at the margins of the discourse; yet others disappear when new frames emerge. Over the years, and depending on venue, privacy has been framed as a matter of security, freedom of expression, human rights, and more. These discursive tokens act as boundary-negotiating artifacts that delineate one policy space from another, as framing is a powerful mechanism in shaping policy debates. Frames are strategically deployed to leverage the status quo to promote one's political agenda in policy deliberation. Taken together, the prominence and the functionality of framing make it an important factor in understanding both actual policy and its potential repercussions. Yet frames are rarely systematically analyzed in information policy research.

This paper offers a longitudinal perspective on online privacy policy formulation through the analysis of frames used to define and refer to privacy-related concepts in a nonbinding, multistakeholder policy deliberation environment. It focuses on the consultative settings of the Internet Governance Forum (IGF), a unique diplomatic experiment within the UN system and international policymaking, and one that attracts a high level of participation from governments, industry, and civil society even though participation is voluntary and its decisions are non-binding. Some have labeled it a decision-shaping rather than a decision-making forum, or a forum where issues of Internet-related policy are being discursively constructed. Many topics discussed at the IGF are also prominent in the global public discourse, privacy being one of the most pivotal issues.

This study is part of a larger project looking into how policy language is shaped and how it influences actual policy decisions. Working with the verbatim transcripts of IGF deliberations, we combine computational linguistics, exploratory visual analytics, and discourse analysis. Specifically, we employ a visualization of selectional preference learning, a computational technique that quantifies preferential relationships among groups of words, to identify passages where definitional framing activities may be occurring. We then apply traditional discourse analysis techniques to these passages in order to map out online privacy as a policy domain within the multistakeholder deliberative environment of the IGF. Our goals here are threefold. First, we will draw a general picture of how online privacy is being framed at the IGF. Second, we will examine how that framing has evolved over time, from the first IGF in 2006 through the sixth IGF in 2011. Third, we will look into the strategic deployment of frames by different stakeholder groups and individual influential actors in this space. We anticipate that our findings will raise focused questions about privacy policy deliberation in other fora beyond the IGF, and that our methodological approach will be applicable to additional Internet policy domains.
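As a rough illustration of the kind of corpus statistic involved (a simple pointwise mutual information score over sentences, a stand-in for, not a reproduction of, the selectional preference learning technique described above):

```python
# Rough sketch: score context words by pointwise mutual information (PMI) with a
# target term across transcript sentences. A simplified stand-in for the
# selectional-preference statistics described above, not the authors' method.
import math
import re
from collections import Counter

transcript = [
    "privacy is a human rights issue and must be protected",
    "data protection and privacy require security safeguards",
    "freedom of expression online depends on privacy and anonymity",
    "infrastructure investment drives broadband deployment",
]  # placeholder sentences standing in for IGF transcript text

target = "privacy"
tokenize = lambda s: re.findall(r"[a-z]+", s.lower())

word_counts, cooc_counts, n_sent = Counter(), Counter(), len(transcript)
target_sentences = 0
for sentence in transcript:
    words = set(tokenize(sentence))
    word_counts.update(words)
    if target in words:
        target_sentences += 1
        cooc_counts.update(words - {target})

def pmi(word: str) -> float:
    p_joint = cooc_counts[word] / n_sent
    p_word, p_target = word_counts[word] / n_sent, target_sentences / n_sent
    return math.log2(p_joint / (p_word * p_target))

for word, _ in cooc_counts.most_common():
    print(f"{word:12s} PMI with '{target}': {pmi(word):+.2f}")
```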


Saturday September 28, 2013 9:35am - 10:10am
GMUSL Room 332

9:35am

Implications of Recent Spectrum Sharing Efforts within the U.S.
Download Paper

Recent efforts within the FCC and the NTIA focus on the opportunity of spectrum sharing as a means to open additional spectrum in a useful frequency range for wireless broadband services. A variety of technologies have made such coexistence models now feasible in ways previously difficult or impossible. This includes such concepts as geo-location databases, sensing and interference tolerance. These technologies allow for denser packing of services, hence spectrum policy can be more creative with sharing proposals and achieve increased spectrum utilization.



This type of sharing has moved from academic thought to policy decisions in several recent rulemaking efforts, including FCC Docket 12-354, which proposes a sharing regime for the 3550-3650 MHz band. The opportunity surrounding 100 MHz of shareable spectrum focused on small cell applications is exciting. This band presents a unique opportunity for the FCC, NTIA, DoD, academia and industry to experiment with novel sharing techniques from both technical and administrative perspectives.



Of particular interest in this band is finding ways for commercial systems to coexist with radar systems. It is therefore imperative to develop appropriate sharing policy that supports both incumbent radar protection and spectrum utilization by small cell devices. One vital aspect of developing appropriate spectrum sharing policy is accurately modeling the interference potential between services. The focus of this work is modeling of the 3550-3650 MHz band spectrum sharing scheme, which entails the interaction between high-powered, interference-sensitive radar systems and low-powered, interference-tolerant small cell devices. Specifically, ship-borne radar exclusion zones were analyzed and compared to modeling work completed by the NTIA, and the aggregate interference impact of small cell devices on ship-borne radars was modeled. The intent of this research is to demonstrate that appropriate interference modeling techniques support an increase in modeling granularity and accuracy, thereby supporting more granular and informed policymaking.



The methodology used in the modeling exercises employed a mix of accepted NTIA techniques and appropriate new methods that better enable the modeling of small cell system architectures. The novel methods used in this research consist of using higher-resolution propagation modeling techniques for interference potential determination, as well as site-specific aggregate impact analysis of small cell devices upon radar systems.
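As a simplified sketch of the aggregate-interference piece of such an analysis (using a generic log-distance path loss model in place of the terrain-aware propagation models discussed here; every parameter is hypothetical):

```python
# Aggregate interference from many low-power small cells at a radar receiver,
# using a generic log-distance path loss model. A simplified stand-in for the
# site-specific, terrain-aware modeling described above; all parameters are hypothetical.
import math
import random

FREQ_MHZ = 3600.0              # 3550-3650 MHz band
TX_POWER_DBM = 24.0            # small cell EIRP (hypothetical)
PATHLOSS_EXPONENT = 3.5        # suburban clutter (hypothetical)
RADAR_PROTECTION_DBM = -109.0  # hypothetical interference threshold at the radar

def path_loss_db(distance_km: float) -> float:
    """Log-distance path loss anchored to free space at 1 km."""
    fspl_1km = 32.45 + 20 * math.log10(FREQ_MHZ) + 20 * math.log10(1.0)
    return fspl_1km + 10 * PATHLOSS_EXPONENT * math.log10(distance_km)

def aggregate_interference_dbm(distances_km: list[float]) -> float:
    """Sum per-cell received powers in the linear (mW) domain, then back to dBm."""
    total_mw = sum(10 ** ((TX_POWER_DBM - path_loss_db(d)) / 10) for d in distances_km)
    return 10 * math.log10(total_mw)

# 500 small cells scattered between 30 and 80 km from a ship-borne radar (hypothetical).
random.seed(0)
cells = [random.uniform(30, 80) for _ in range(500)]
agg = aggregate_interference_dbm(cells)
print(f"Aggregate interference at radar: {agg:.1f} dBm "
      f"({'exceeds' if agg > RADAR_PROTECTION_DBM else 'within'} protection threshold)")
```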



The research found that exclusion zones can be significantly reduced in ship-borne radar operation areas. In addition, it was found that insufficient information on radar equipment specifications has been provided by the U.S. government to accurately model interference potential, reducing the policymaker's ability to craft appropriate sharing policy. It was also found that the ITM propagation model does not use land use or building data along radio propagation paths, which is necessary for accurate modeling of small cell network deployment cases; ITM is therefore insufficient for modeling radar and small cell system interaction. Correspondingly, to accurately model small cell systems in a site-specific manner, higher-resolution geographic data and a propagation model that can utilize such data are necessary. It was also found that small cell device loading for aggregate interference impact analysis can be accomplished through the use of census, land use and building data, and can be done in a site-specific manner. This loading methodology can support more granular interference-potential data that can better shape spectrum policy work. Finally, it was found that the advances in technology that support spectrum sharing, discussed previously, should not be bottlenecked by legacy interference modeling techniques when more granular methods are currently available.

Moderators
Speakers

Saturday September 28, 2013 9:35am - 10:10am
Founders Hall 111

10:10am

Assessing Access Network Market Power
Download Paper

Interconnection agreements in the telecommunications industry have always been constrained by regulation, but Internet interconnection has not received the same level of scrutiny. Recent debates regarding proposed mergers, network neutrality, Internet peering, and last mile competition have generated much discussion about whether Internet interconnection regulation is warranted.

Concerns about network market power are not limited to those network providers at the top of the Internet hierarchy. In recent years, the concerns have actually shifted from the backbone providers at the core of the Internet to the access networks at the edge. Limited access network competition can not only impact consumer pricing or increase the likelihood of harmful discrimination, it can also lead to unfair interconnection practices. Recent interconnection disputes have generated questions about whether the concentration in access networks impacts the ability of other network providers to deliver traffic to those end users. This concern is not restricted to interconnection between access networks and upstream networks. Due to the emergence of "hyper giants" who are responsible for a large percentage of Internet traffic, it extends to the relationships between access networks and pure-play CDNs as well as access networks and large content providers.

In order to determine whether regulation is necessary, policymakers need appropriate metrics to help gauge a network provider's market power. The metrics used to assess backbone network market power may not be as relevant for access networks. Ideally, policymakers would have access to Internet interconnection agreements, as these reflect a network's bargaining power. Unfortunately, these are not typically publicly available. Policymakers must instead rely on proxy metrics and inferred interconnection relationships. This paper proposes a new metric, access variance, for assessing access network market power. This metric is defined as the diversity of feasible paths into an access network and can provide a useful assessment of an access network's ability to control access to its end users.

Content delivery networks (CDNs) may provide some visibility into an access network's access variance. A CDN will typically distribute requests from an access network's end users across multiple deployments depending on factors such as performance and cost. While some of these deployments may be located within the access network, other deployments may be in upstream networks that have interconnection agreements with the access network. This access variance can help ascertain the diversity of access methods other than establishing an interconnection agreement directly with the access network.

The paper provides an empirical example analyzing whether a CDN has alternative options for reaching an access network's end users. The example focuses on France, as the French ISPs have publicly stated their intention to limit CDN access if they are not compensated according to their terms. By analyzing CDN traffic flows in France, we can determine the extent of the French ISPs' ability to restrict access. If a CDN is either unable to deliver content to an access network's end users, or is only able to deliver content to those users from its deployments within the access network, this may suggest that the access network is able to control access to its subscribers. The analysis involves identifying the DNS nameservers associated with the French ISPs and issuing requests for CDN content from those nameservers. The resulting server IP addresses are recorded and analyzed to map the CDN deployments responding to those requests. In this analysis, the CDN deployments within the access networks served less than 50% of the requests originating from the access networks' on-network nameservers. This suggests that there are viable alternatives to establishing interconnection agreements directly with the access networks.
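A minimal sketch of this measurement style using the dnspython library (the hostname, resolver addresses, and prefix list are placeholders, not the networks actually studied):

```python
# Resolve a CDN hostname through several ISP resolvers and tally how many
# answers fall inside the access network's own address space.
# The hostname, resolver IPs, and prefix list are placeholders.
import ipaddress
import dns.exception
import dns.resolver  # pip install dnspython

CDN_HOSTNAME = "images.example-cdn.net"          # hypothetical CDN-served hostname
ISP_RESOLVERS = ["192.0.2.53", "198.51.100.53"]  # hypothetical ISP DNS resolvers
ON_NET_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical ISP prefixes

def resolve_via(resolver_ip: str, hostname: str) -> list[str]:
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    return [rr.address for rr in r.resolve(hostname, "A")]

on_net = off_net = 0
for resolver_ip in ISP_RESOLVERS:
    try:
        for addr in resolve_via(resolver_ip, CDN_HOSTNAME):
            inside = any(ipaddress.ip_address(addr) in p for p in ON_NET_PREFIXES)
            on_net += inside
            off_net += not inside
    except dns.exception.DNSException as exc:
        print(f"lookup via {resolver_ip} failed: {exc}")

total = on_net + off_net
if total:
    print(f"{100 * on_net / total:.0f}% of answers point at on-net CDN deployments")
```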

Moderators
Speakers

Saturday September 28, 2013 10:10am - 10:45am
GMUSL Room 221

10:10am

The Hardest Cases of Broadband Policy: Native Telecommunications, Captive States, and Policy Entrainment
Download Paper

Providing telecommunications service and promoting adoption on Native American lands remains one of the hardest problems in telecommunications policy (e.g., FCC 2011). Yet a notable success story for the diffusion of broadband Internet has been the Tribal Digital Village (TDV), a solar wireless Internet distribution network that serves 19 federally-recognized Native American Reservations in Southern California. On the most impoverished of these reservations, residents live without phones, paved roads, or constant electrical power and the effective poverty rate is 100%. Participating reservations are often located in extremely mountainous, remote areas. On a few of these, residents now have TDV broadband available and use it every day, a remarkable feat given the obstacles involved (Srinivasan 2006, 2007; Sawhney & Suri, 2008; McGrath 2011; Sandvig 2012).

This paper will examine TDV as a broadband policy case study in order to understand the nuances of subsidized broadband in situ. Founded in 2000, TDV is a governmental ISP that has grown from one location to operate in a rectangle about 175 miles by 75 miles. It spans the area from the US-Mexico border into Riverside County with over 90 miles of wireless backbone (point-to-point) links operating on solar power at a minimum of 45 Mbit/sec. It serves about 2,000 users. It is inconceivable that a high-speed Internet project designed exclusively to serve people living in poverty in the mountains would be possible without subsidies, and the TDV has been heavily subsidized. (It received private philanthropy, Universal Service Fund support, National Science Foundation funding, cross-subsidy from Casino profits, BTOP funding, and more.) Service and home computers were initially free to all users, but after a decade TDV transitioned to a paid monthly service fee as is common among small commercial ISPs.

This paper will present material from a qualitative ethnography of infrastructure (after Star 1999) about the TDV. This study could also be conceptualized as longitudinal, embedded, single-case design selected for atypicality (Yin 2003). The findings to be presented come from five TDV site visits and four off-site interviews from 2004-2013 ranging from 1-7 days each. These produced over 380 audio and video recordings and over 1,000 photographs of equipment, users, engineers, and officials. In addition, researchers analyzed documents, budgets, grant proposals, and news coverage about the TDV. Few other recent research projects have a similar longitudinal perspective on native broadband and no researcher studying the TDV has acquired access to a comparable corpus of documents.

Prior work (op. cit.) identified the success of TDV as resulting from massive subsidy, proximity to a major research university (UCSD), and a complex of difficult-to-reproduce circumstances. This paper will examine the TDV's transition away from free service and its effort to introduce sources of revenue beyond subsidy--essentially a major model shift from collectivism to capitalism. This analysis allows for reflection on rural telecommunications, broadband policy, the digital divide, spectrum allocation, and a number of other fundamental, overarching telecommunications topics. However, elaborating theories of "captive states" (Black, 1988), this study also uses the TDV to demonstrate the important particularity of each native telecommunications project. It argues that small sovereign states like The Ewiiaapaayp Band of Kumeyaay Indians or The Santa Ysabel Band of Diegueño Indians participate in a policy process termed "entrainment" (where, as in geology, a small particle is moved by a large fluid flow yet remains distinct from it). The TDV and other native policy projects are drawn along in the stream of US policy, yet an overriding policy goal in Native telecommunications is often to remain distinctive (e.g., Michaels, 1994). As this goal is often implicit, the consequences are often unexpected and counter-intuitive.

Moderators

Heather Hudson

U of Alaska Anchorage

Speakers

Saturday September 28, 2013 10:10am - 10:45am
GMUSL Room 225

10:10am

The Impact of High-Speed Broadband Availability on Real Estate Values: Evidence from United States Property Markets
Download Paper

Numerous factors are known to have an impact on the value of residential real estate, including, for example, the energy efficiency of the building, the proximity of good schools, or the amount of crime in the neighborhood. This paper presents an empirical study of the impact of access to high-speed Internet on real estate values. In our research, we explore whether people are willing to pay more for real estate located in areas where high-speed broadband is available than for a property that does not offer this amenity. We use unique data on broadband availability from the National Broadband Map for the years 2010-2012, and combine this dataset with public information on residential deed transfers for the same time period. Using a hedonic price framework, we investigate why constant-quality real estate prices vary, where we define a constant-quality real estate as a residential property whose structural, land, and community attributes are all held constant.

Estimating the value of high-speed Internet availability through property markets creates challenges for such empirical research. On the one hand, residential properties in markets with high-speed broadband access would be expected to have greater value; on the other hand, good-quality broadband infrastructure is also expected to be rolled out first in high-income areas with high real estate values. To separate these two effects, we use an econometric model and control for the unobserved consumer demand effects that are jointly positively correlated with residential real estate prices and high-speed Internet rollout. Also included in the paper are tests of hypotheses derived from spatial econometric theory and the urban amenity literature.
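A bare-bones sketch of a hedonic specification along these lines (column names, the input file, and the fixed-effects choices are placeholders; the paper's actual model and controls are richer):

```python
# Hedonic price regression: log sale price on broadband availability plus
# structural, land, and community attributes, with location and time fixed effects.
# Column names and the CSV path are placeholders for the merged NBM / deed data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

deeds = pd.read_csv("deeds_with_broadband.csv")
deeds["log_price"] = np.log(deeds["sale_price"])

model = smf.ols(
    "log_price ~ high_speed_available"      # 1 if a high-speed tier is offered locally
    " + sqft + bedrooms + lot_size + age_of_home"
    " + school_quality_index + crime_rate"
    " + C(census_tract) + C(sale_year)",    # fixed effects absorb local demand shifts
    data=deeds,
).fit(cov_type="cluster", cov_kwds={"groups": deeds["census_tract"]})

# With log price on the left, the coefficient on high_speed_available is read as an
# approximate percentage premium for otherwise comparable, constant-quality homes.
print(model.params["high_speed_available"])
```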

To demonstrate how academic property valuation theories work in practice, we discuss several examples based on the combined master dataset. In particular, we empirically observe differences in value across regional markets and changes in the value of broadband access over time, and analyze the potential implications for policymakers. Our research differs from previous studies in that the primary focus of our analysis is the impact of high-speed broadband on residential property values. The paper adds to the existing literature by conducting an empirical analysis of these impacts, with the ultimate objective of measuring the value of broadband Internet through real estate markets across the US. The outcome of this work could offer valuable contributions to both telecommunications policy and property valuation research.

Moderators
Speakers
avatar for Gabor Molmar

Gabor Molmar

Interdisciplinary Telecommunications Program, University of Colorado Boulder


Saturday September 28, 2013 10:10am - 10:45am
GMUSL Room 120

10:10am

Violating Your Privacy: An Economist's Perspective
Download Paper

Concerns about privacy are growing. A right to privacy is in part a constitutional right. However, there are also important economic implications for the fair redress and enforcement of that right. Admittedly, not everything of value can be measured in dollars and cents, and courts have found that monetary compensation is not sufficient for violations of constitutional rights, such as free speech. Nevertheless, a better understanding of the economic values associated with privacy, and with its violation, can inform the current policy debate in at least two ways. Narrowly, violations of privacy that cause direct economic harm may need to be compensated. Broadly, a better understanding of the economic market failures associated with privacy can help inform policies that attempt to create the right price signals to guide private decision making where privacy is concerned.

The economic harms to individuals who have their privacy violated fall into at least two categories. First, some violations of privacy lead to direct economic harms. This is the type of harm, for example, that occurs with identity theft: someone gains access to your private information, which allows them to create liabilities in your name. Second, while not always imposing economic costs, some privacy violations create value that is not shared with the individuals whose information creates it. This is the so-called Big Data phenomenon, where aggregations of data are more valuable than their component parts. The economic flows associated with this type of transaction can be further complicated when the data are collected through use of a "free" product. The first part of this analysis reviews the literature and characterizes the issues involved in estimating both of these types of value. For example, the Big Data problem shares many characteristics with the allocation of common costs and the apportionment of jointly created value, as well as with two-sided markets, topics economists have studied extensively.

The broader economic issues reach beyond the value of data about individuals to those individuals and concern the externalities of costs and benefits to others. Better understanding these externalities is urgent as institutions around privacy are developed and policy is codified in legislation. These costs and benefits can be divided between those that directly impact other economic actors (e.g., firms, data aggregators and researchers) and those that concern society as a whole (e.g., so-called social benefits of Big Data or protection of constitutional rights). The first type of externality is the kind imposed on a company that possesses protected data from individuals. While this data might be necessary for a firm to provide appropriate services to its customers, the firm incurs costs to protect the data. Significantly burdensome protection requirements would impose substantial negative externalities on the firm protecting the data, possibly leading to underprovision of the firm's goods or services. The second type of externality concerns the potential for research and analysis for the greater good, as many proponents of Big Data would argue. This is essentially a positive social externality. In the face of such social externalities, economic inefficiencies arise, suggesting the potential for some policy intervention. Understanding the underlying economic consequences of protecting, or violating, privacy can help guide policies that enable the correct price signals for private actors to inform the proper level of privacy protection. (Of course, punitive damages for violations of fundamental rights are also intended to incentivize behavior, although with possibly blunter signals.)
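As an illustration of the joint-value apportionment problem mentioned above, one standard device from cooperative game theory (our choice of example, not necessarily the authors') is the Shapley value, which splits a coalition's joint value according to average marginal contributions. The sketch below uses invented player names and toy numbers.

```python
from itertools import permutations

def shapley_shares(players, value):
    """Average marginal contribution of each player over all join orders.

    `players` is a list of labels; `value` maps a frozenset of players to the
    joint value of that coalition (toy numbers below, purely illustrative).
    """
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: s / len(orders) for p, s in shares.items()}

# Toy value function: aggregated data is worth more than the sum of its parts.
def v(coalition):
    base = {"user_A": 1.0, "user_B": 1.0, "aggregator": 0.0}
    synergy = 4.0 if len(coalition) == 3 else 0.0
    return sum(base[p] for p in coalition) + synergy

print(shapley_shares(["user_A", "user_B", "aggregator"], v))
```

With these toy numbers the aggregator is credited with roughly 1.33 of the total value of 6.0 for the synergy it enables, while the two data subjects split the remainder equally, which is one way of making the "who created the value" question concrete.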

In addition to characterizing and organizing the different types of economic flows associated with various dimensions of privacy, this paper will provide illustrative examples of estimating select economic values.

Moderators
Speakers
avatar for Coleman Bazelon

Coleman Bazelon

Principal, The Brattle Group
GM

Giulia McHenry

Associate, The Brattle Group


Saturday September 28, 2013 10:10am - 10:45am
GMUSL Room 332

10:10am

An Exploration into Spectrum Policy Debates on Social Media
Download Paper

Spectrum policy debates are generally divided between advocates for more robust property rights that would allow Coasian bargaining and advocates for spectrum commons that would permit more unlicensed applications. As recent debates about the upcoming broadcast spectrum incentive auctions indicate, there is also basic disagreement about what the Federal Communications Commission's role should be in managing spectrum for mobile wireless devices. Some advocates seek enhanced regulation that would implement bright-line limits on spectrum holdings and/or differentiate between spectrum bands in evaluations of spectrum aggregation, e.g., a separate limit for sub-1 GHz holdings, while others argue that the FCC should largely refrain from limiting spectrum aggregation across spectrum bands by wireless carriers.

These debates are often seen as contests between the regulated and the regulator, i.e., wireless carriers and their organizations against the FCC, with a repetitive dynamic: the regulator seeks to implement its statutory mission by means of further regulation while the players in the industry resist constraints on their ability to act, or alternatively, seek regulation that furthers particular interests, e.g., regulation that benefits the dominant players, while disfavoring smaller competitors and new entrants.

Interestingly, contemporary spectrum policy debates involve a broader set of actors than the regulators and the regulated. Today, spectrum policy advocates include numerous organizations and individuals that are not directly regulated, but believe they have a stake in spectrum policy. These advocates are distinct in their increased use of and engagement with social media and new forms of Internet-based advocacy instead of traditional processes, e.g., filing comments with the FCC at appropriate times in response to proposed rulemakings. New forms of advocacy allow highly technical spectrum policy debates to engage large audiences, which encourages further participation by non-traditional actors. Despite its seemingly abstract and technical nature, spectrum policy is being debated in real time on Twitter and Facebook around the world, occupying the same space as commentary about Justin Bieber and Lady Gaga.

This paper utilizes new tools for analyzing social media postings to understand the scope and contours of spectrum policy debates happening on social media. Raw Twitter and Facebook feed data is collected and analyzed using a variety of computational methods, including event detection, sentiment analysis, informational analysis and trending topic analysis. Visualization tools are also used to provide insight into the underlying data. These insights and analysis are then used to chart key events that lead to bursts of online activity; to map the geographic distribution of contributions, including international debates; and to synthesize recurring themes and memes.
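As a concrete, deliberately simplified illustration of the event-detection step, the sketch below buckets collected posts by hour and flags hours whose volume sits well above the series average; the timestamp format, threshold, and sample data are assumptions rather than details from the paper.

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

def detect_bursts(timestamps, z_threshold=2.0):
    """Flag hours whose posting volume is far above the series average.

    `timestamps` is an iterable of ISO-8601 strings from a collected feed
    (the format is an assumption, not the paper's actual schema).
    """
    hourly = Counter(
        datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00") for ts in timestamps
    )
    counts = list(hourly.values())
    if len(counts) < 2:
        return []
    mu, sigma = mean(counts), stdev(counts)
    return [
        (hour, n)
        for hour, n in sorted(hourly.items())
        if sigma > 0 and (n - mu) / sigma >= z_threshold
    ]

# Toy data: one post per hour in the morning, then a burst at 3pm.
sample = ["2013-09-28T%02d:00:00" % h for h in range(9, 14)]
sample += ["2013-09-28T15:%02d:00" % m for m in range(10)]
print(detect_bursts(sample))  # -> [('2013-09-28 15:00', 10)]
```

Real pipelines would add sentiment scoring and topic clustering on top of this, but the burst-flagging logic is the piece that maps posting activity to "key events."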

Based upon these empirical results, this paper seeks to engage academic debates regarding the contemporary politics of spectrum policy; regulatory agenda setting; the effect of online social media on policy debates, particularly regarding highly technical issues and resource allocation; and the efficacy of certain types of Internet engagement, including the social media efforts of the FCC and other established actors in this sphere.

Social media spectrum policy debates suggest that the Internet is not just broadening political participation, but also increasing the depth of that participation into highly technical arenas. As more spectrum becomes available for mobile broadband and increased adoption follows, the new politics of spectrum may increasingly be debated on social media by an increasingly diverse set of actors.


Saturday September 28, 2013 10:10am - 10:45am
Founders Hall 111

10:40am

Coffee Break
Saturday September 28, 2013 10:40am - 11:10am
Atrium

11:10am

Lemons on the Edge of the Internet: Technological Convergence and Misleading Advertising in the Provision of Internet Access Services
Download Paper


This article investigates the causes, consequences, and possible remedies to the problem of misleading advertising in the provision of broadband Internet access services. Network performance measurements from OECD countries between 2007 and 2012 document a significant increase in the variability of broadband infrastructure quality, which helps explain growing demand for technologies and policies that counteract information asymmetries between network operators and end users. A cross-country analysis documents the negative association between quality uncertainty and variations in digital infrastructure quality. The analysis suggests public policies and business models that promote market transparency can enhance the efficiency of the broadband access market on the edge of the Internet and stimulate incentives for the diffusion of next generation platforms. 
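The abstract does not spell out how quality uncertainty is operationalized; purely as an illustrative proxy (our assumption, not the paper's measure), one could compare measured throughput against the advertised rate and summarize its dispersion along the following lines.

```python
from statistics import mean, stdev

def quality_uncertainty(measured_mbps, advertised_mbps):
    """Two toy transparency metrics for a single access offer: the coefficient
    of variation of measured speed and the mean shortfall relative to the
    advertised rate. Both are our illustrative proxies."""
    cv = stdev(measured_mbps) / mean(measured_mbps)
    shortfall = 1.0 - mean(measured_mbps) / advertised_mbps
    return {"coef_variation": round(cv, 3), "mean_shortfall": round(shortfall, 3)}

# Hypothetical 'up to 20 Mbps' offer measured across a month of tests.
print(quality_uncertainty([18.2, 6.5, 14.9, 9.3, 17.1], advertised_mbps=20.0))
```

Higher values of either metric correspond to the kind of information asymmetry between operator and end user that the paper associates with a "lemons" dynamic.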


 

Moderators
avatar for Prof. Terry Flew

Prof. Terry Flew

Assistant Dean (Research and International Engagement), Creative Industries Faculty, QUT
Terry Flew is an internationally recognised leader in media and communications, with research interests in digital media, global media, media policy, creative industries, media economics, and the future of journalism. He is the author of Australia’s leading new media textbook, New Media: An Introduction, which has sold over 15,000 copies over four editions (2002, 2005, 2008, 2014). He is also the author of Global Creative Industries... Read More →

Speakers

Saturday September 28, 2013 11:10am - 11:45am
GMUSL Room 221

11:10am

Independent, Local Broadband and Business Performance: A Multiple Case Study
Download Paper

Prior research has supported the notion that affordable, business-class broadband access is critical for businesses to thrive well into the 21st Century. For places that are still unserved and underserved by broadband providers, some local conglomerates, often spearheaded by local governments, have taken it upon themselves to provide broadband access through a municipal or public-private partnership model. A question remains as to whether these independent models are primarily used by specific types of businesses, or associated with certain business-related internet activities (including non-use) and self-reported improvements in business performance. Our previous research has already shown that differences exist in terms of business user satisfaction between these independent models and more mainstream, national broadband providers. Local businesses who used mainstream internet providers reported mixed satisfaction vis-à-vis businesses using independent local providers, even though research on residential users showed satisfaction scores for local providers as being uniformly higher than their mainstream counterparts. (Fortunato et al. 2013, under review).

This paper uses primary survey research of local businesses of various sizes from a six-site multiple case study in Maine, Pennsylvania, and Wisconsin to examine the relationship between use of a local, independent provider and business performance, compared to businesses using mainstream providers and those that do not use broadband at all. The analysis examines whether the choice of a mainstream or local provider has also influenced business activities and self-reported business performance on several metrics, such as increased sales and reduced operational costs. Other control variables, such as the length of time in business, size of business, and number of employees telecommuting or working remotely via the internet, will also be examined. Additionally, the paper examines differences in attitudes about broadband use in business, such as the importance of reliable, affordable broadband to succeed, across users of mainstream and independent local services, and those businesses that do not use broadband at all. While the implications of the digital divide are well understood, this paper attempts to uncover associations that may suggest the impact of local, independent broadband delivery on overall local business effectiveness. This case study cannot be generalized to all business broadband users, but the paper aims to fill an important gap in the research, where a more complete understanding of the impacts of local broadband development is required. We then detail the resulting implications of the study's findings with regard to local, state, and federal economic and business development policy, in an attempt to understand the value of independent broadband development compared to mainstream service proliferation, and whether policy could, or should, broaden its focus on independent service provision.

Reference:

Fortunato, M.W.P., Alter, T.R., Sterner, G.E. III, McPhail, L.G. and Schwarz, M.R. (2013). "Imperative Opportunity: The Risks and Rewards of Independent Local Broadband Development." Economic Development Quarterly, Under Review.


Saturday September 28, 2013 11:10am - 11:45am
GMUSL Room 225

11:10am

Internet Policy’s Next Frontier: Usage-Based Broadband Pricing
Download Paper

Usage-based pricing has rapidly become one of the most controversial topics in Internet policy. Both wired and wireless broadband providers are migrating from flat-rate pricing to a variety of consumption-based pricing models. Some consumer groups have viewed the change to usage-based pricing with skepticism, fearing it will usher in an era of higher prices, deteriorating service, and increasingly anticompetitive conduct.

This article evaluates the merits of data caps, tiered-service plans, and other usage-based pricing strategies. It finds that usage-based broadband pricing is not inherently anti-consumer or anti-competitive. Rather, it reflects a cluster of pricing strategies through which a broadband company might recover its costs and fund future investment. Under a flat-rate plan, lighter users cross-subsidize heavier users. By aligning costs with broadband use, usage-based pricing shifts more network costs onto those who use the network the most.

Critics claim that usage-based pricing is unfair because the marginal costs of data transport are low, and therefore heavier users do not cost more to serve than lighter users. But this argument ignores the significant fixed costs of building and maintaining a network. The central challenge for broadband pricing is allocating those fixed costs across the customer base. Unlimited flat-rate pricing is one strategy, but not necessarily the most efficient. Usage-based pricing is a form of price discrimination that allows broadband providers to recover more of their fixed costs from customers with relatively inelastic demand. This strategy more closely approximates Ramsey pricing and therefore may be a more efficient way of recovering fixed costs without distorting consumer preferences.
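For readers unfamiliar with the reference, the textbook Ramsey (inverse-elasticity) rule that the argument invokes, stated here in generic form rather than as a formula from the paper, is

\[
\frac{p_i - c_i}{p_i} = \frac{\lambda}{|\varepsilon_i|},
\]

where \(p_i\) is the price charged to customer segment \(i\), \(c_i\) the marginal cost of serving it, \(\varepsilon_i\) its own-price elasticity of demand, and \(\lambda\) a constant chosen so that the markups jointly recover the network's fixed costs. Segments with less elastic demand bear proportionally higher markups, which is the sense in which usage-based tiers can recover fixed costs with less distortion of consumption.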

Pricing experimentation may also help narrow the digital divide. By recovering more fixed costs from heavier users, firms may have more freedom to extend service at a lower rate to light users who are unable or unwilling to pay the unlimited flat rate. There is evidence that these opportunities are beginning to emerge from companies engaged in usage-based pricing.

Usage-based pricing may also help alleviate network congestion, though the case is less clear. Unlimited flat-rate pricing encourages overconsumption of network resources, which can create congestion during peak periods. Usage-based pricing encourages consumers to use less bandwidth and to demand more bandwidth-efficient practices by Internet content and service providers. But it is not clear that current pricing strategies will help mitigate peak-time congestion. To use pricing to shift consumption to off-peak periods, companies would need to show peak times are predictable. Fixed networks show regular peak times, but congestion is not currently a significant problem for fixed providers. On the wireless side, congestion is problematic but peak times are less predictable. With current technology, usage-based pricing may reduce aggregate demand and help chronically over-congested networks, but firms likely cannot yet use differentiated pricing to reduce peak-time congestion levels.

Unquestionably, some vertically-integrated broadband providers may use usage-based pricing anticompetitively to shield cable affiliates from Internet-based video competition. While these risks exist, the literature suggests that vertical restraints on trade can be pro-competitive or anticompetitive. Regulators should remain vigilant with regard to potentially anticompetitive conduct, but should intervene only when a firm exploits market power in a way that causes actual consumer harm. Regulators should also assure that broadband providers have transparent pricing practices and give consumers tools to estimate monthly data use.

Only through experimentation and empirical measurement will providers find the optimal pricing solution, which may vary dramatically by network. Thus far, regulators have correctly rejected the call to interfere with this pricing flexibility, absent a demonstration of market failure and consumer harm. This study shows why they would be wise to continue doing so.

A working paper version was published by the Mercatus Center in October 2012.

Moderators
Speakers

Saturday September 28, 2013 11:10am - 11:45am
GMUSL Room 120

11:10am

Policy Framework for Spectrum Management in Emerging Economies: Lessons from Mobile Broadband Spectrum Allocations in India and the Philippines
Download Paper

Given the sparse wireline/cable infrastructure in most emerging economies, mobile broadband is likely to become the main mode of broadband access for most people in these countries, especially given the near ubiquity of mobile networks.

Spectrum allocations for 2G mobile services in many emerging economies happened in the context of weak policy and regulatory institutional infrastructure, using allocation mechanisms that were often not transparent. Since then, however, spectrum auctions have gained greater acceptance as an allocation instrument. Several countries have also adopted more market-oriented mechanisms such as refarming (India, the Philippines, Sri Lanka), and some have introduced trading (Guatemala).

Despite adopting newer methods of spectrum management, deployment of mobile broadband is constrained because many emerging economies have to contend with the legacy issues of 2G allocations, as several bands targeted for mobile broadband (900, 1800, and 1900 MHz) are also used for 2G. Some of these issues relate to licenses being technology- and service-specific, the absence of a specific refarming policy, or inadequate incentives for incumbents to move to newer or other spectrum bands. Developed countries such as the US and UK, by contrast, are able to design and implement new instruments such as "incentive auctions" and spectrum trading, respectively, not only allowing market mechanisms to prevail but also showing agility and an ability to adopt new ideas within their regulatory and policy bodies. Such capacities are often lacking in an emerging economy context.

This paper examines the context of spectrum allocation for mobile broadband in India and the Philippines. Besides the established regulatory institutions - the Telecom Regulatory Authority of India (TRAI) and the National Telecommunications Commission (NTC), respectively - the courts and the media have had a role to play in this allocation. Both countries are also dealing with legacy allocations. For example, in India, the Supreme Court cancelled 122 2G licenses and directed TRAI to auction the spectrum so released. But TRAI withheld some spectrum, citing future requirements of refarming for mobile broadband. The Supreme Court reiterated that all spectrum related to the cancelled licenses had to be auctioned. Similarly, in the Philippines, there were court cases relating to the allocation of 3G spectrum and the reallocation of spectrum from broadcasting to BWA services by the NTC.

This paper adopts the case-based approach advocated by Eisenhardt (1989) and Yin (2003) to generalize from the selected mobile broadband allocation contexts in India and the Philippines and to delineate the role of the regulatory and policy-making agencies, the judiciary, and other institutions in spectrum management. We used primary and secondary sources of data to develop the cases. Primary sources include interviews with key decision makers in TRAI and the operators. These were supplemented by secondary data such as court judgments, media reports, and journal articles.

Based on this, we develop a framework for spectrum management both in terms of the institutional structure and market mechanisms that would enable developing countries to manage spectrum and develop more market based and innovative strategies.

Given the increasing gap in broadband deployment and adoption between emerging economies and developed countries, it is important to review the policy and regulatory frameworks in emerging economies that could accelerate broadband growth. This paper contributes by contextualizing institutional roles and instruments for spectrum management in emerging economies, and by doing a comparative study we believe it makes a unique contribution, especially as there are few studies based in emerging economies.

Moderators
Speakers
R

Rekha

Executive chair, Telecom Centre of excellence, Indian Institute of Management Ahmedabad
Broadband, Internet Governance, Spectrum Auctions


Saturday September 28, 2013 11:10am - 11:45am
Founders Hall 111

11:45am

Take Your Phone Number with You! Explaining the Diffusion of Number Portability Policy Across Nations
Download Paper

Out of around 200 countries in the world, only 75 have number portability. What explains the speed and pattern of diffusion of this regulatory issue? Research on policy diffusion offers several explanations: constructivist, coercion, competition, and learning. Constructivists argue that countries adopt a policy, sometimes even before they are ready, in order to appear modern and forward-looking. Coercive explanations argue that countries adopt policies because they are forced to, through bilateral or multilateral agreements, for example. Competition explanations suggest that countries adopt policies in order to make themselves comparatively more attractive, to foreign investors, for example. Finally, the learning explanation suggests that governments' beliefs about policies change over time: they learn by observing other countries implement a policy and monitoring its effectiveness. All of these explanations apply to the diffusion of some policies internationally. The challenge is to understand which explanations apply more aptly under what kinds of conditions and for what kinds of policies.

To see which of these explanations applies to the global diffusion of number portability regulation, and through this single issue a view to the diffusion of communications regulation more broadly, first, I have collected data on number portability, the start of fixed line phone competition, and the start of mobile phone competition in countries around the world. This global data set, centered on three regulatory issues, provides some clues as to the pace and pattern of regulatory diffusion in the communications arena.
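One simple way to summarize the pace of diffusion in such a dataset (a sketch under invented intermediate counts, not the author's analysis) is to fit a logistic curve to the cumulative number of adopting countries by year.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative adopters at time t: ceiling K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Hypothetical cumulative counts of adopting countries; only the 2013 total
# of 75 is taken from the abstract, the rest are invented for illustration.
years = np.array([1997, 1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013], float)
adopters = np.array([2, 6, 14, 25, 38, 52, 63, 70, 75], float)

params, _ = curve_fit(logistic, years, adopters, p0=[80.0, 0.3, 2004.0])
K, r, t0 = params
print(f"ceiling ~{K:.0f} countries, midpoint ~{t0:.0f}, growth rate {r:.2f}/yr")
```

Fitted ceilings well below 200 countries, or regional differences in the midpoint year, would be consistent with the lag between pioneer and follower regions that the paper describes.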

Second, I have collected qualitative information on number portability discussions in five regional organizations: the Asia-Pacific Economic Cooperation (APEC), the Inter-American Telecommunications Commission under the Organization of American States (CITEL), the European Union, the Economic Community of West African States (ECOWAS), and the Common Market for Eastern and Southern Africa (COMESA). These documents show which countries were interested in number portability, and when. These qualitative data reveal patterns and links not evident in the global quantitative datasets.

Finally, the paper concludes that examining regulatory diffusion through the lens of number portability suggests that Asia, the Americas, and Europe are the three regions that innovate first; the Middle East and Africa follow later. Further, in Asia and the Americas certain countries are pioneers while others wait to see results before proceeding; learning appears to explain the diffusion pattern in these regions. In contrast, in Europe regulatory diffusion begins early and proceeds rapidly, without the lag time observed in Asia and the Americas, very likely because of the leadership and enforcement powers of the European Union, a coercive explanation among member states and a competitive one among non-member states. The data also make it possible to identify which countries often lead in regulatory innovation, conclusions that can be tested as more data on regulatory diffusion are collected for other communications issues. This is usable knowledge that can be applied to following the current diffusion of regulatory innovations across the world.

Moderators
avatar for Prof. Terry Flew

Prof. Terry Flew

Assistant Dean (Research and International Engagement), Creative Industries Faculty, QUT
Terry Flew is an internationally recognised leader in media and communications, with research interests in digital media, global media, media policy, creative industries, media economics, and the future of journalism. He is the author of Australia’s leading new media textbook, New Media: An Introduction, which has sold over 15,000 copies over four editions (2002, 2005, 2008, 2014). He is also the author of Global Creative Industries... Read More →

Speakers
I

Irene

FCC
Senior Analyst, International Bureau, FCC. Adjunct professor, Georgetown University. Author of forthcoming book, "Forging trust: how network technology changes politics," Johns Hopkins University Press, which investigates how both activists and governments around the world exploit the latest technology, from the telegraph to social media.


Saturday September 28, 2013 11:45am - 12:20pm
GMUSL Room 221

11:45am

Impacts of the Broadband Telecommunication Opportunities Program in Michigan Urban Communities
Download Paper

One of the pillars of the Broadband Telecommunication Opportunities Program (BTOP) was expanding broadband utilization in underserved communities through the development of library-based public computing centers (PCCs) and educational outreach. This paper reports the results of two waves of surveys of 400 participants each in spring 2011, before the PCCs were implemented, and fall 2012, at the end of the program. The surveys were conducted in urban communities in Michigan served by libraries participating in a $6 million BTOP grant project to upgrade their public Internet resources. The surveys tracked perceptions of broadband services and their utilization in public libraries, residences, and other community locations.

This study focuses on understanding how the utilization of library computers by community members impacted broadband adoption and utilization for upward mobility through education, work experience, and access to technology. Results indicated no changes between our two time-periods with respect to increased broadband awareness, home Internet access through either a computer or smartphone, or high speed home fixed-line broadband. Home fixed-line broadband access was more likely in households with higher incomes, a willingness to pay higher prices, and younger members than the rest of the sample.

We discovered clear indications of who considered broadband access and library use the most beneficial. African-American and male participants were more likely than white or female participants to indicate an intention to adopt broadband as a means of starting a small business and doing work at home. Gender had no effect on taking courses online, but African-Americans were more likely than whites to see potential in taking online courses over broadband networks. We also found that older participants and participants from Wave 2 were less likely than younger and Wave 1 participants to show an intention to take online courses. Further, race, income, and self-efficacy (i.e., belief in one's ability to use the Internet successfully) interacted to influence the nature of broadband use. Participants who were young, African-American, high in self-efficacy, and from low-income households were more likely to use broadband to look for employment outside their home city than participants who were older, white, higher-income, and low in self-efficacy. Young African-Americans with high self-efficacy were also more likely than white and low self-efficacy participants to state that the ability to take online courses was an important benefit of broadband Internet.
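A regression capturing the kind of race-by-income-by-self-efficacy interaction reported above could be specified roughly as follows; the synthetic data, column names, and formula are our assumptions, not the actual BTOP survey coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the two-wave survey (column names are hypothetical).
rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "wave": rng.choice([1, 2], n),
    "race": rng.choice(["african_american", "white"], n),
    "income": rng.normal(40, 15, n),        # household income, $1000s
    "self_efficacy": rng.uniform(1, 5, n),  # 1-5 scale
})
logit_p = -2 + 0.3 * df["self_efficacy"] * (df["race"] == "african_american")
df["job_search_online"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic model with the race x income x self-efficacy interaction.
model = smf.logit(
    "job_search_online ~ age + C(wave) + C(race) * income * self_efficacy",
    data=df,
).fit(disp=0)
print(model.summary())
```

The three-way interaction term is what would carry the finding that young, low-income African-American respondents with high self-efficacy use broadband for job search at higher rates.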

Do library-based PCC facilities promote broadband utilization in urban settings? Controlling for age and point in time, library Internet use was more likely among African-American, low-income, and more educated participants than among white, higher-income, and less educated participants. PCC participants, defined as participants who reported using PCC facilities, were more likely than non-PCC participants to be younger, African-American, and female, and to have higher intentions to adopt broadband. Further, PCC participants were more likely than non-PCC participants to intend to use broadband to work at home and take courses online. Overall, 33.4 percent of the sample in Wave 2 reported using computers in their local library. Among these, 29.5 percent said they had noticed an improvement in PCC facilities.

Our data suggest that, while broadband adoption intentions and home use of Internet connections were unaffected by the PCC intervention, there are ways in which other types of Internet resources are being accessed by marginalized people. These analyses suggest that library Internet services can benefit minority, low-income individuals with more years of education who intend to utilize these resources for online courses, business endeavors, and employment. We conclude, based on the above evidence, that library Internet access and use, and the further development of resources like PCCs, will be crucial for future digital literacy programs.

Moderators
Speakers
avatar for Johannes M. Bauer

Johannes M. Bauer

Professor and Chairperson, Michigan State University
I am a researcher, writer and teacher interested in the digital economy, its governance as a complex adaptive systems, and the effects of the wide diffusion of mediated communications on society. Much of my work is international and comparative in scope. Therefore, I have great interest in policies adopted elsewhere and the experience with different models of governance. However, I am most passionate about discussing the human condition more... Read More →


Saturday September 28, 2013 11:45am - 12:20pm
GMUSL Room 225

11:45am

Nonlinear Pricing: Self-Selecting Tariffs and Regulation
Download Paper

During the late 1970s and early 1980s, a series of articles on nonlinear/multi-part tariffs explored the welfare effects of these prices. They found that multi-part prices could improve consumer surplus or welfare. In particular, nonlinear prices can benefit each and every individual consumer as well as consumers in the aggregate (Willig 1978).

During the same period, articles appeared on self-selecting tariffs, under which the consumer picks a pricing plan from a menu of prices. One of the principal findings of these studies on self-selecting or optional tariffs was that individual consumers generally purchased a plan that was more expensive than they needed. In other words, consumers paid for service above their needs, and paid more than they would have if they had perfect information about their future usage. One critical assumption of neoclassical economics is that consumers have perfect information regarding prices and usage of products. Observation of consumers’ behavior with self-selecting tariffs belies that assumption.

Today, more than ever, the Information and Communications Technology (ICT) sector offers a variety of self-selecting packages from which to choose. One must select among various cellular phone packages, broadband services, and mobile wireless “hot spot” devices: which broadband plan for DSL service, how many minutes for cellular service, what level of use for wireless data, and so on. However, with the push for “competition” and deregulation, the ICT oligopolies have not been subject to price controls. Indeed, pricing regulation of these firms has been neglected (Alleman and Rappoport 2005).

This paper explores the relationship between these various areas and suggests how regulators should exert their influence in policy making. Specifically, regulators should require ICT firms to be more transparent regarding their optional pricing packages. Indeed, we propose that ICT firms should be required to bill their consumers ex post under the “best” price structure for their actual usage, rather than requiring consumers to select a package ex ante. This pricing policy would allow society to reap the savings and welfare benefits of nonlinear pricing.
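Operationally, the ex post "best price" rule amounts to computing the bill under every plan in the menu for the usage actually observed and charging the minimum; the sketch below illustrates this with hypothetical two-part tariffs (plan names and numbers are invented).

```python
def bill(plan, usage_gb):
    """Bill under a simple tariff: fixed fee, included allowance, overage rate."""
    return plan["fee"] + max(0.0, usage_gb - plan["allowance"]) * plan["overage"]

def best_ex_post_bill(plans, usage_gb):
    """Return the plan that yields the cheapest bill for the observed usage."""
    return min(plans, key=lambda p: bill(p, usage_gb))

# Hypothetical menu of self-selecting data plans.
menu = [
    {"name": "small",  "fee": 20.0, "allowance": 2.0,  "overage": 10.0},
    {"name": "medium", "fee": 35.0, "allowance": 10.0, "overage": 8.0},
    {"name": "large",  "fee": 60.0, "allowance": 50.0, "overage": 5.0},
]
chosen = best_ex_post_bill(menu, usage_gb=6.0)
print(chosen["name"], bill(chosen, 6.0))  # -> medium 35.0
```

A customer who used 6 GB would thus be billed as if she had chosen the medium plan, even if she had signed up for the small one ex ante, which is the transparency requirement the paper proposes.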

 


Speakers
avatar for James Alleman

James Alleman

Professor Emeritus, University of Colorado – Boulder


Saturday September 28, 2013 11:45am - 12:20pm
GMUSL Room 120

11:45am

Why Are College Students Easily Targeted?: The Enforcement of the Graduated Response Policy on Campus
Download Paper

Copyright law enforcement is a matter of balancing: balancing different legal rights and interests of creators (or right holders) and users, enforcement costs and benefits, and benefits from the prevention of using infringing material and costs of the deterrence (or termination) of potential fair use. The last balancing concern is at the core of the copyright and copyleft debate. This study intends to relate that general debate to the question of equal treatment of infringers by examining the current implementation of graduated response policy (also known as "three strikes" policy).

Individual copyright infringers vary, ranging from students to corporate workers. It is reasonable to assume that some people may suffer more from losing access to copyrighted material than others, and that some uses of copyrighted works are more likely to fall under fair use than other uses. Despite these individual differences, copyright law is expected to be applied to every individual in an equal and nondiscriminatory manner. However, is every user of copyrighted works subject to the same probability of punishment? Isn't there a possibility that some people are more vulnerable to copyright enforcement than others?

This study stems from the concern that college students are more exposed to copyright enforcement, including the graduated response policy and pre-litigation settlement, than other categories of users due to the current digital copyright enforcement system. Although a considerable portion of their use is educational and in turn falls into fair use, it is more likely that they get caught and punished for copyright infringement. Many universities in the United States, in practice, use graduated response schemes and can effectively control copyright infringing activities of their students by blocking their access to the Internet through the university network. Since college students heavily rely on the Internet network provided by their universities, the termination of Internet access could result in serious impediments to their academic and social activities, as well as restricting their freedom of expression.

For these reasons, this study aims to identify possible problems with the enforcement of the graduated response policy by universities. It will investigate the number of students disconnected from the Internet as a result of university copyright enforcement and compare that number to the number of users disconnected by commercial Internet Service Providers (ISPs), including Comcast and Cablevision. In addition, as criticized in prior studies on graduated response schemes (e.g., Haber, 2011; Suzor & Fitzgerald, 2011), the lack of means to prevent and remedy false accusations of copyright infringement is the main problem with the graduated response policy. Thus, the study will examine whether universities provide any protection for students who are falsely accused and whether they have the power to halt proceedings against unreasonably accused students.

The empirical data for this study are collected mainly through archival research and by interviewing university personnel in charge of copyright issues on the university network. This study is meaningful in that it will alert society to the substantive and procedural problems of one type of copyright enforcement and suggest ways to redress those problems and to achieve more balanced and effective copyright systems.

References
Haber, E. (2011). The French Revolution 2.0: Copyright and the Three Strikes Policy. Harvard Journal of Sports & Entertainment Law, 2(2), 297-339.
Suzor, N., & Fitzgerald, B. (2011). The Legitimacy of Graduated Response Schemes in Copyright Law. University of New South Wales Law Journal, 34(1), 1-40.


Saturday September 28, 2013 11:45am - 12:20pm
GMUSL Room 332

12:20pm

The Experts in the Crowd: The Role of Reputable Investors in a Crowdfunding Market

Moderators
avatar for Prof. Terry Flew

Prof. Terry Flew

Assistant Dean (Research and International Engagement), Creative Industries Faculty, QUT
Terry Flew is an internationally recognised leader in media and communications, with research interests in digital media, global media, media policy, creative industries, media economics, and the future of journalism. He is the author of Australia’s leading new media textbook, New Media: An Introduction, which has sold over 15,000 copies over four editions (2002, 2005, 2008, 2014). He is also the author of Global Creative Industries... Read More →

Speakers

Saturday September 28, 2013 12:20pm - 12:50pm
GMUSL Room 221

12:20pm

The Rise of Quasi-Common Carriers and Conduit Convergence
Download Paper

The technologies that deliver content to consumers have begun to converge into a single Internet-driven conduit. Such convergence supports a consolidation of previously stand-alone markets, as evidenced by the ability of ventures to offer a “triple-play” bundle of Internet-delivered video, data and telephone service. Converging technologies and markets eliminate a sharp and identifiable distinction between the service classifications created by Congress and applied by the Federal Communications Commission (“FCC”). The Commission faces a regulatory quandary in maintaining a clear dichotomy between carriers operating as private conduits and carriers subject to government oversight. The former can deliver content, software and services largely free of government regulation, while in the latter category some operate as common carriers bearing public utility obligations and others incur FCC mandates to carry video content and place it on particular channel locations.

This paper will examine whether and how converging technologies and markets provide an opportunity for the FCC to impose more forms of quasi-common carrier duties on ventures that otherwise would qualify for limited or no regulation. The paper will examine a recent court affirmance of the FCC’s requirement that all cellphone companies provide subscribers of other carriers “roaming” access to data services, despite the classification of Internet access as a largely unregulated information service. The paper also will examine previous instances where courts have affirmed FCC decisions to impose quasi-common carrier duties, such as the mandatory carriage of local broadcast television signals. 

The paper shows how the FCC has found ways to impose quasi-common carrier duties on ventures providing convergent services even though these ventures appear to qualify for little, if any, regulation. The FCC may have devised ways to respond to changed circumstances and the rigidity of congressionally crafted service definitions. However, such flexibility generates regulatory uncertainty and the potential for the Commission to exceed its statutory authority. The paper concludes that the FCC will consider applying quasi-common carrier duties to private carriers without certainty about the permissible reach of this option.

Moderators
Speakers
avatar for Rob Frieden

Rob Frieden

Pioneers Chair and Professor of Telecommunications and Law, Penn State University
Rob Frieden holds the Pioneers Chair and serves as Professor of Telecommunications and Law at Penn State University. He has written over seventy articles in academic journals and several books, most recently Winning the Silicon Sweepstakes: Can the United States Compete in Global Telecommunications, published by Yale University Press. | | Before accepting an academic appointment, Professor Frieden held senior U.S. government policy making... Read More →


Saturday September 28, 2013 12:20pm - 12:50pm
GMUSL Room 225

12:20pm

How Do ISP Data Caps Affect Subscribers?
Download Paper

It has become common for ISPs to place caps on the monthly usage of cellular and broadband data plans. ISPs commonly claim that caps benefit most users. They cite statistics that show that a small percentage of users consume a high percentage of network capacity, and claim that flat-rate pricing is unfair to the majority of users. They claim that caps affect only heavy users, result in lower tier prices, and increase the incentive for ISPs to add capacity to the network.

In contrast, many public interest groups claim that caps hurt most users. They claim that caps discourage the use of certain applications and encourage a climate of scarcity. They claim that caps and their corresponding overage fees do not correspond to the cost of network capacity, and that their use may decrease an ISP's incentive to add capacity.

There is a vigorous debate over the use of caps. A Senate bill would require the FCC to evaluate data caps to determine whether they reasonably limit network congestion without unnecessarily restricting Internet use.

However, there is little academic literature that addresses the impact of data caps. In this paper, we propose models to evaluate the impact of data caps upon subscribers and ISPs. The model includes the critical elements of both Internet architecture and economic motivations. We model user utility for the two dominant applications: web browsing and video streaming. Utility is represented as a function of a user's relative willingness-to-pay, the time devoted to each application per month, and the performance of each application. User surplus is expressed as utility minus the opportunity cost of the time devoted. We model a monopolist ISP that sets tier prices, tier rates, network capacity, data caps, and overage charges in order to maximize subscription plus overage revenue minus the cost of capacity. Combining these two models, users choose a tier and decide how much time to devote, and correspondingly whether to violate the cap.

The analysis is based on optimization methods. We show how users fall into three categories: those unaffected by a cap, those who are capped but do not choose to exceed the cap, and those who exceed the cap and pay overage charges. We give closed-form expressions for which users fall into each category, based on a user's willingness-to-pay, opportunity cost, and the level of the cap. We then examine a monopolist's use of caps. We compare the optimal tier rates, tier prices, and network capacity without caps to the same quantities when caps are added. We first consider the case in which an ISP institutes caps in order to ensure that heavy users pay an amount equal to the cost of their usage, and show how an ISP will set the cap and which users it will affect. We then consider the case in which an ISP sets caps and overage fees to maximize profit. We show that in this case, an ISP will increase the tier rate and decrease the tier price. We also give closed-form expressions, under certain assumptions, for which users will be hurt and which will benefit from the cap and changes in tier rate and price.
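A stylized version of this setup (our notation and simplifications, not the paper's closed-form results) helps fix the three categories. Writing a subscriber's monthly surplus as

\[
S(t) = U(w, t, r) - c\,t - p - f\,\max\bigl(0,\; x(t) - C\bigr),
\]

with \(w\) the willingness-to-pay, \(t\) the time devoted, \(r\) the tier rate, \(c\) the opportunity cost of time, \(p\) the tier price, \(x(t)\) the data volume generated, \(C\) the cap, and \(f\) the overage fee, the three cases correspond to (i) \(x(t^{*}) \le C\) at the unconstrained optimum \(t^{*}\) (unaffected), (ii) usage reduced until \(x(t) = C\) (capped but no overage), and (iii) marginal utility at the cap still exceeding \(c + f\,x'(t)\) (cap exceeded, overage paid).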

Finally, we give numerical results when users have a constant elasticity of demand. We illustrate how the tier rate, tier price, cap, and overage fees vary with the standard deviation in Internet usage amongst subscribers. We also illustrate the increase in ISP profit when caps are used, the corresponding change in user surplus, and the change in total social welfare.

Moderators
Speakers
avatar for Scott Jordan

Scott Jordan

University of California


Saturday September 28, 2013 12:20pm - 12:50pm
GMUSL Room 120

12:50pm

Lunch and Student Paper Awards
Saturday September 28, 2013 12:50pm - 2:00pm
Multi purpose Room

2:00pm

Data, Trade, and Growth
Download Paper

Cross-border data flows are a major aspect of today's global economy. Moreover, the volume of cross-border data flows is growing at a rapid rate, with data flowing through terrestrial and submarine cables, and to a much lesser extent via satellites. According to TeleGeography, a consulting firm that keeps track of international data flows, demand for international bandwidth increased at a compound annual rate of 49 percent between 2008 and 2012.

Cross-border data flows are also increasingly critical in trade negotiations. Cross-border data flows are one of the main subjects of the new round of trade negotiations announced by the United States Trade Representative, Ambassador Ron Kirk in January 2013. Similarly, the European Union is considering new data privacy regulations that would impact flows of data in and out of the EU.

Ironically, statistics about the magnitude of cross-border data flows are scarce, despite their growing economic and political importance. The trade report from the Census Bureau and the Bureau of Economic Analysis contains some information about imports and exports of telecommunications services, but we show that these figures likely miss much of the increase in cross-border data traffic because of fundamental changes in the structure of global networks. Similarly, international agencies such as the ITU only collect fragmentary statistics on cross-border data flows, though they are putting more effort into estimating such figures.

Moreover, we may not have the right conceptual framework for thinking about the economic impact of cross-border data flows. The usual categories of exports and imports don't apply very well to cross-border data flows, since it's not clear that an outflow of data from a country should count as an export. Indeed, long-established conventions treat outgoing international phone calls as imports, even though both the originating network and the receiving network play an equal role in the call. We will see below that the assignment of outgoing international phone calls to the import category is essentially an artifact of regulation.

This paper is intended as a preliminary exploration of the empirical, conceptual, and policy aspects of cross-border data flows. There are three main results. First, we do a rough comparison of the magnitude of cross-border flows of data with all internet/IP traffic. We find that for the United States, cross-border flows are roughly 16-25% of all U.S. data traffic. Cross-border data flows between Europe and the rest of the world equal roughly 13-16% of all European data traffic. These estimates, which should be treated as highly imprecise and tentative, suggest that the United States is more interconnected with the rest of the world than Europe.

Second, we consider the meaning of data trade between countries. We propose that replication of data between countries is a more useful and intuitive concept than either imports or exports of data. When companies such as Google or Akamai devote resources to replicating data from one country in another country, its relative utility goes up in the second country because it can be accessed with faster response times (latency) and at a higher quality.

Third, we note some initial policy implications of data trade as replication. In particular, replication acts as an increase in the intangible capital stock of a country, boosting its economic output at a relatively low cost. This result suggests that taxing or regulating data flows can potentially have a harmful effect on economic growth.

Note: An early draft of this paper was presented at a non-reviewed conference on "Measuring the Effects of Globalization" (Washington DC, February 28-March 1, 2013).

Moderators
Speakers
MM

Michael Mandel

Progressive Policy Institute


Saturday September 28, 2013 2:00pm - 2:35pm
GMUSL Room 221

2:00pm

The Law and Economics of the FCC's Transaction Review Process
Download Paper

This article assesses the FCC's current policies and rules regarding transaction reviews, concluding that the Commission's current spectrum transfer review process harms consumer welfare. In particular, the FCC's spectrum screen as currently structured, its standard of review for spectrum transfers, its use of conditions, and the scope of its transaction reviews exceed legal limits, impede efficient markets for spectrum, and deter welfare-increasing transactions and investment.

First, we explain the FCC's current policies and decisions regarding transaction reviews and assess their appropriateness with respect to the Commission's authorizing legislation, regulations and case law. With respect to the scope of its transaction reviews and its use of conditions in particular, we find that the FCC's practices exceed their permissible limits.

Next, we address the economics of the FCC's policies and decisions, explaining and assessing the animating economic logic behind the FCC's actions. We demonstrate that the FCC's current spectrum screen and transaction review standards rest on the premise that spectrum concentration in markets inherently leads to anticompetitive behavior. Further, we explain the flaws in this premise.

In demonstrating and assessing the basis of the FCC's transaction reviews, we discuss the particulars of the FCC's spectrum screen in detail, focusing on its use of concentration metrics and claims that its full analysis (beyond the initial screen) investigates competitive conditions more broadly. As we discuss, the Commission uses HHIs and spectrum concentration measures improperly as de facto triggers for per se illegality, rather than triggers for further investigation. Further, none of the full analyses described by the Commission investigates an aspect of competition other than market or spectrum concentration; instead, they simply restate in more detail the structural analysis implied by the HHI test and spectrum screen.
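For reference, the concentration metric at issue is the Herfindahl-Hirschman Index; this is its standard definition, not an excerpt from the paper.

\[
\mathrm{HHI} = \sum_{i=1}^{n} s_i^{2}
\]

Here \(s_i\) is firm \(i\)'s market share in percentage points, so the index runs from near 0 up to 10,000 for a monopoly; four equal-sized carriers, for instance, give \(4 \times 25^{2} = 2{,}500\), the level the 2010 Horizontal Merger Guidelines treat as the boundary of a "highly concentrated" market. The paper's complaint is that crossing such a threshold should trigger further inquiry rather than operate as a de facto per se rule.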

Addressing the economics underlying the FCC's actions, we demonstrate that both economic theory and evidence indicate that the presence of more competitors in telecommunications markets does not necessarily result in lower prices and better service for consumers. Particularly in industries (like wireless) that are characterized by rapid technological change, non-horizontal competitive constraints and shifting consumer demand, the threat of entry and the need for repeated contracts with input providers with market power operate to constrain strategic behavior, even in heavily concentrated markets.

Further, because mobile competition requires substantial upfront investment and the acquisition of large swaths of spectrum to achieve minimum viable scale, efficient market structure tends to be oligopolistic or monopolistic. Spectrum is only a small fraction of what it takes to succeed in the wireless industry, as making effective use of that spectrum requires towers, switches, routers, security, maintenance, customer service and innovation, along with risky investments in all of these. Thus, while AT&T and Verizon may be the two largest holders of wireless spectrum, they have also invested substantially more in their network infrastructures and in technological innovation, and have developed and offered considerably more products and services than their competitors in order to take advantage of their spectrum holdings. The welfare effects of spectrum concentration are at worst ambiguous, and, as we demonstrate, as the market has grown more concentrated, investment, coverage and product diversity have increased while prices for consumers have decreased. These results are consistent with a more robust model of firm behavior in the industry that takes account of entry threats and technological change.

We conclude with a discussion of the policy implications and suggestions for reform.

Moderators
Speakers
avatar for Geoffrey Manne

Geoffrey Manne

Executive Director, International Center for Law & Economics
avatar for Ben Sperry

Ben Sperry

International Center for Law and Economics


Saturday September 28, 2013 2:00pm - 2:35pm
GMUSL Room 225

2:00pm

Sustainable Broadband: A Monitoring Framework for Broadband Policy in Rural Areas in the Netherlands
Download Paper

As General Purpose Technologies (GPTs) (Bresnahan & Trajtenberg, 1995), information and communication technologies (ICT) have been applied in different ways across companies and regions. The differences in ICT adoption across regions have been approached from a variety of perspectives, e.g. from the business users' (Forman, 2005) and residential users' point of view (Goldfarb & Prince, 2008), showing that regional differences are key characteristics of ICT growth. But these spatial disparities have to be approached with caution, as technological characteristics of broadband are important in defining rural areas (Grubesic & Murray, 2002). Even if rural areas in the European Union are defined according to technological and geographical dimensions (CEU, 2012), there is ample room for a variety of interpretations by market parties and local governments. In cases of (local) government intervention in broadband markets, there is a need to evaluate broadband initiatives with respect to their economic benefits in rural areas.



In order to evaluate the economic impact of broadband, numerous studies, in particular in the United States, have examined the link between broadband availability and a number of economic characteristics of rural regions (Connected Nation, 2008; Gillett, Lehr, Osorio, & Sirbu, 2006; Whitacre, Gallardo, & Strover, 2013). Only recently has the literature focused on elements that would put broadband on a sustainable path, i.e. taking the environmental implications of ICT into account (Røpke, 2012). A number of evaluations of broadband programs have shown that "encouraging broadband adoption is only part of a larger digital literacy effort, and programs work when they make non-users want to connect, make the Internet cheaper and easier to use, and adjust to users' preferences" (Hauge & Prieger, 2010). However, there is currently no generally accepted monitoring framework that can be used to evaluate and monitor the (social) costs and benefits of broadband programs in rural areas.



The paper contributes to the discussion by providing a monitoring framework for sustainable broadband in rural areas. This framework is developed using the (social) cost-benefit analysis (SCBA) methodology that has been applied to (all) large infrastructure programs in the Netherlands since 2003 (Eijgenraam, Koopmans, Tang, & Verster, 2000). This methodology is applied to investigate the implementation of broadband technologies in rural areas. In this context, a DPSIR scheme (originally developed as a framework for environmental indicators) is applied to monitor progress, and an SCBA to evaluate the choice between the different alternatives (Eijgenraam et al., 2000). As proposed by Ramírez (2007), such an alternative monitoring framework is needed to evaluate economic progress in rural areas.
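The core of an SCBA comparison across alternatives can be stated generically (the notation here is ours, not taken from Eijgenraam et al.):

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^{t}}
\]

where \(B_t\) and \(C_t\) are the monetized social benefits and costs of a given broadband alternative in year \(t\) (including, where they can be quantified, items such as e-health and teleworking gains) and \(r\) is the social discount rate; alternatives are ranked against a reference case by net present value, while the DPSIR indicators track whether the assumed benefits actually materialize over time.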



As part of NGA networks (CEU, 2009, 2012), fiber technologies are increasingly considered by provincial governments in the Netherlands to be the most future-proof broadband infrastructure technology for rural areas. Different provinces in the Netherlands have set up large infrastructural broadband funds aimed at stimulating local economic development and bridging the digital divide between rural and urban areas (Provincie Noord-Brabant/ Dialogic, 2012; Provincie Overijssel, 2010). It is expected that fiber technologies will provide higher value added to society (compared to traditional broadband technologies such as DSL or cable modem) over the long term. For rural areas, these benefits have in particular been related to the arrival of e-health services, teleworking, and the renewal of rural areas. For residential users, benefits from e-health services are emerging through smart home solutions that support greater self-reliance of the elderly and allow people to stay at home longer. For business users, benefits from teleworking are related to the reduction of travel time (Van der Wee et al., 2012). New NGA technologies can also support the renewal of rural areas (McGranahan & Wojan, 2007; Stephens & Partridge, 2011). To complicate matters, provincial governments face a number of choices when evaluating different project proposals for rural broadband. For example, to what extent do these proposals reflect the goals of provincial plans for rural areas (e.g., do they support the renewal of rural areas; do they improve or enhance the self-reliance of inhabitants)? To what extent do these proposals opt for the "right" NGA network technology? Do they support a competitive supply structure and new trans-sectoral services? In this context, a monitoring framework for broadband policy in rural areas in the Netherlands will provide more certainty with respect to interventions by provincial governments, a better evaluation of the emerging (social) benefits, and a better understanding of their allocation among the different stakeholders in provincial programs.



References:

Bresnahan, T., & Trajtenberg, M. (1995). General Purpose Technologies: Engines of Growth? Journal of Econometrics, 65(1), 83-108.

CEU. (2009). Community Guidelines for the Application of State Aid Rules in Relation to Rapid Deployment of Broadband Networks (Final Document) 17 September 2009. Brussels: CEU.

CEU. (2012). EU Guidelines for the Application of State Aid Rules in Relation to the Rapid Deployment of Broadband Networks (Draft). Brussels: CEU.

Connected Nation. (2008). The Economic Impact of Stimulating Broadband Nationally: Connected Nation.

Eijgenraam, C. J. J., Koopmans, C. C., Tang, P. J. G., & Verster, A. C. P. (2000). Evaluatie van Infrastructuurprojecten. Leidraad voor Kosten-Batenanalyse. Den Haag: CPB en NEI.

Forman, C. (2005). The Corporate Digital Divide: Determinants of Internet Adoption. Management Science, 51(4), 641-654.

Gillett, S., Lehr, W., Osorio, C., & Sirbu, M. (2006). Measuring the Economic Impact of Broadband Deployment. Final Report. February 2006: U.S. Department of Commerce, Economic Development Administration.

Goldfarb, A., & Prince, J. (2008). Internet Adoption and Usage Patterns are Different: Implications for the Digital Divide. Information Economics and Policy, 20, 2-15.

Grubesic, T. H., & Murray, A. T. (2002). Constructing the divide: Spatial disparities in broadband access. Papers in Regional Science, 81(2), 197-221.

Hauge, J., & Prieger, J. (2010). Demand-Side Programs to Stimulate Adoption of Broadband: What Works? Review of Network Economics, 9(3).

McGranahan, D., & Wojan, T. (2007). Recasting the Creative Class to Examine Growth Processes in Rural and Urban Counties. Regional Studies, 41(2), 197-216.

Provincie Noord-Brabant/ Dialogic. (2012). Digitale Agenda van Brabant. Den Bosch / Utrecht: Provincie Noord-Brabant/ Dialogic.

Provincie Overijssel. (2010). Breedbandnetwerk in Overijssel. Statenvoorstel nr. PS/2010/1031.

Ramírez, R. (2007). Appreciating the Contribution of Broadband ICT With Rural and Remote Communities: Stepping Stones Toward an Alternative Paradigm. The Information Society, 23(2), 85-94.

Røpke, I. (2012). The unsustainable directionality of innovation – The example of the broadband transition. Research Policy, 41(9), 1631-1642.

Stephens, H. M., & Partridge, M. D. (2011). Do Entrepreneurs Enhance Economic Growth in Lagging Regions? Growth and Change, 42(4), 431-465.

Van der Wee, M., Driesse, M., Vandersteegen, B., Van Wijnsberge, P., Verbrugge, S., Sadowski, B., et al. (2012). Identifying and quantifying the indirect benefits of broadband networks: a bottom-up approach. Proceedings 19th ITS Biennial Conference 2012, Bangkok, Thailand.

Whitacre, B., Gallardo, R., & Strover, S. (2013). Rural


Saturday September 28, 2013 2:00pm - 2:35pm
GMUSL Room 120

2:00pm

A World Without Cablevision nor Sony: How Japanese Courts Find Providers of Personal Locker and Content-Sharing Services Liable
Download Paper

Japan presents an interesting example of how intermediary liability can be handled by courts that place primary responsibility on the provider side rather than on end-users. This paper examines a series of Japanese court cases on the copyright infringement liability of online service providers, in comparison to the U.S. cases. Taken together, they present a long-overdue broad picture of the relevant cases in Japan and the U.S., one that is lacking even in Japanese legal scholarship.

The Japanese courts found a number of providers to be direct infringers of copyright when, on the surface, they were merely providing a content-sharing platform or personal locker service. There are two counter-intuitive and critical grounds shown in these Japanese opinions, and they contrast with the ones found in key U.S. cases. First, those providers could be regarded as the direct infringers of the uploaded content because the volition of the infringement could well be on the side of the provider, not its end-users (the volition question). What would otherwise be unauthorized yet legal private copying by an end-user becomes illegal copying by a service provider under this line of thinking. A few key criteria had been cited to determine the volition of the infringement, but one Supreme Court decision came with a note suggesting that there should not be such a fixed set of criteria, and that the court would engage in a normative determination of who should be deemed the uploader of a copyrighted material.

Second, the providers are regarded as engaging in "public transmission" of copyrighted materials when a personal locker service is available to either unspecified persons OR a large number of persons (the public performance question). The end-user's downloading of copyrighted materials from a personal locker, following this way of thinking, is actually public transmission of those materials by the provider.

The first part of the paper discusses the Japanese historical context for the volition question, focusing mainly on the pioneering Club Cat's Eye decision from the 1980s, in which the owner of a karaoke bar, not the customers who enjoyed singing, was found to be the direct infringer of copyright and was deemed to have engaged in the singing. What lies behind this and subsequent decisions is the lack of an injunctive remedy for indirect infringement, which, according to a number of legal scholars, pushed courts to find the providers to be directly infringing. Some contrasts with the Sony decision are highlighted.



The second and third parts of the paper discuss recent cases such as MYUTA (a multi-device personal music locker service), TV Break (video sharing), and RokuRaku II (remote DVR/routing of TV programs to devices). They highlight differences with such U.S. cases as Cablevision, YouTube, and MP3tunes, where the volition and public performance questions are answered in a contrasting manner.

Finally, the paper reviews recent Japanese legal and policy discussions around these issues to examine possible legislative reactions. Cloud-computing friendliness is an important consideration for copyright reform, and the issue of provider liability is a concern. Based on several interviews conducted by the authors with Japanese providers and IP/Internet lawyers, these court cases have had a real chilling effect on the development of new cloud-based services in Japan. However, no short-term solution is conceivable. There has been a suggestion to introduce indirect infringement into Japanese copyright law, thereby limiting the scope of actions that could be deemed direct infringement. Yet certain rights holders have been resistant to the idea and have expressed support for keeping the current state of the law.


Saturday September 28, 2013 2:00pm - 2:35pm
GMUSL Room 332

2:00pm

Technical Principles of Spectrum Allocation
Download Paper

Spectrum allocation and management is an ongoing process that will benefit from guidance by a set of fundamental technical principles. Historically, spectrum allocation has been an ad hoc, piecemeal system driven by the logic of the moment: in most cases, a commercial enterprise or government agency with a need requested an allocation, and if the regulator agreed, it allocated the best available fit from the inventory. In other cases, spectrum assignment has been initiated by the regulator itself, either to good effect or otherwise.

The result of 80 years of ad hoc allocation is a system in which neighboring allocations sometimes pose tremendous burdens on each other, particularly in cases where high-power systems adjoin low-power ones. Such allocation errors give rise to intractable disputes over spectrum usage rights. Market dynamics are helpful, but not altogether sufficient to create a system of rational allocation, as each player maximizes its own interests, which in the short term preserves inefficient allocations in the overall frequency map.

A more rational system of spectrum assignment would respect the principles that are evident in the operation of actual high-demand, high-performance, and high-efficiency wireless networks and in the trajectory of near-term spectrum research and development. In brief, these principles are:

1. Power Compatibility: Adjoining allocations with similar power levels are more valuable than those in which power levels are dissimilar.

2. Sharing: Assignments that serve multiple users or applications are more valuable than those that serve a single user.

3. Dynamic Capacity Assignment: Assignments that allow capacity to be adjusted on demand are more valuable than those that allocate capacity statically.

4. Technology Flexibility: Assignments that permit technology upgrades which increase usage efficiency are more valuable than those without an upgrade path.

5. Aggregation Efficiency: Large allocations are more valuable than small ones as they minimize guardband losses.

6. Market Competition: Allocations that create marketplace competition are more valuable than those that don't.

7. High-Performance Receivers: Allocations that incentivize the deployment of high-performance receivers are more valuable than those that can't tolerate common sources of RF noise.

8. All Relevant Dimensions: Allocate "patches" of spectrum by frequency, power level, place, transmission direction, beam spread, modulation, coding, polarization, quantum states, and time.

9. Redeployment Opportunities: Allocations that free up spectrum for new systems, such as DTV, are more valuable than those that preserve the status quo.

10. Support the Research Agenda: Allocations that support R&D are generally more valuable than those that don't.

These allocation principles flow from empirical knowledge of the nature of spectrum, the current state of the art in radio engineering, and the likely timeline of new developments in radio engineering.

They are complemented by an analysis of research initiatives on spectrum utilization that may make the entire enterprise of spectrum allocation by regulators moot.

Moderators
Saturday September 28, 2013 2:00pm - 2:35pm
Founders Hall 111

2:35pm

A Quantitative Approach to Include Time and Innovation in Traditional Market Analysis
Download Paper

Telecommunications markets enjoy frequent technological, service and business innovations. Those innovations very often radically change the industry's economics and the competitive advantages of incumbent players, and they frequently allow new entrants to make inroads into the industry.

At the same time, the telecommunications industry is either heavily regulated or, at the least, faces close and continuous scrutiny by competition authorities. However, regulatory authorities usually assess the competitive situation of the industry using static equilibrium models that fail to consider the competitive effects of innovation. As a consequence, those market analyses do not properly represent actual market behavior, and they may lead the authorities to make decisions that drive the market into unexpected and undesired outcomes.

We propose an alternative economic model for analyzing telecommunications markets, that of dynamically contestable markets. We combine the insights of the contestable market theory developed by Baumol, Panzar and Willig with our own analysis of innovation processes in telecommunications markets and their impact on the competitive behavior of both incumbents and potential entrants.

The paper's methodology will have the following steps: an overview of telecommunications economics and the drivers of industry structure; an analysis of innovation processes in telecommunications and their impact on industry structure; an analysis of the role of time in the evolution of market structure upon the launch of an innovation, including the main time parameters (innovation rate, relaxation time, fixed asset life, break-even time); an assessment of the limitations of static equilibrium models (perfect competition and contestable markets) in representing the effects of innovation and investment in the telecommunications industry; a proposal of a dynamically contestable markets model, including a discussion of the conditions for this model to be applicable to a specific industry and the behavior of rational players under this model; a high-level empirical analysis of the proposed model in two dimensions (historical innovation rates in the telecoms market, and case studies of incumbent and potential entrant operator responses to actual or anticipated innovations); and a summary of the conclusions and a discussion of their implications for policy making.

We expect to show that the innovation rate in telecommunications markets has been so high that the market structure has usually not had enough time between two successive innovations to return to equilibrium. Therefore, static equilibrium models are not suitable for analyzing these markets, and even less for designing regulatory obligations. We expect telecommunications markets to fulfill the conditions to be classified as dynamically contestable, in which the competitive behavior of rational agents is disciplined by the threat of triggering innovation in competitive technologies or business models that may lead to competitive entry into the market.
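A stylized illustration of the comparison behind this claim follows; the parameter values are assumptions chosen for demonstration, not the paper's empirical estimates.

```python
# Illustrative comparison of innovation interval vs. market relaxation time.
# Parameter values are assumptions for demonstration only.

innovation_rate_per_year = 0.5        # e.g., one major innovation every 2 years
mean_innovation_interval = 1 / innovation_rate_per_year
relaxation_time_years = 5.0           # time for market shares to settle after a shock

if mean_innovation_interval < relaxation_time_years:
    print("Market is perturbed again before reaching equilibrium: "
          "static equilibrium analysis is a poor fit.")
else:
    print("Market has time to settle between innovations: "
          "static equilibrium analysis may be adequate.")
```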

We expect our work to provide some guidance about how to more accurately analyze the behavior of real world telecommunication markets, as well as high level criteria to decide whether this model could be applied to analyze other markets that are also highly innovative and capital intensive.

Moderators
Speakers

Bruno Soria

Director Regulatory Services, Telefonica


Saturday September 28, 2013 2:35pm - 3:10pm
GMUSL Room 221

2:35pm

The Impact of Intelligent Infrastructure Investment on the Well-Being of a Country
Download Paper

There is significant debate among scholars and policymakers about the size and role of government. Many governments have responded to economic recession by cutting their expenses to balance their budgets. There are nonetheless some government expenses that are indispensable for a healthy national economy.

In this paper we argue that many nations that have achieved basic infrastructure provision need to move to the development of intelligent infrastructures through the use of ICTs. We believe that the impact these technologies can have on the overall well-being of a nation is now significantly higher than that of further investment in other infrastructures.

We thus want to explore the impact that government expenditure, debt, and economic recession have had on infrastructure investments, and the impact that these investments have on a country's well-being as measured by the Human Development Index.

Fifty years ago it was essential to provide sanitation and water to prevent illnesses that could have deadly consequences. Electricity has been crucial for overall economic activity from manufacturing to retail, and transportation has facilitated and expanded commerce. Information and communication technologies are now capable of supporting the development of intelligent infrastructures that can improve the well-being of a nation, including what scholars have called "beyond GDP" metrics of development.

Current infrastructures are, for the most part, dumb; they don't generate information that could be used to make decisions regarding maintenance, replacement or upgrades. Next generation infrastructures can use sensors and communication networks to provide relevant parties and decision makers with information that can facilitate decision making and potentially reduce operational costs. Highways, for example, can send information about congestion and road conditions. Electricity grids can provide information about demand, loads, and outages. Water pipes can send alerts about unusual levels of usage and breakdowns.

Intelligent infrastructures rely on ICTs, but governments, under both internal and external pressure to cut costs, may not be making the investments that these intelligent infrastructures require. It is our belief that by reducing investments in intelligent infrastructures they are potentially incurring higher costs in the future and reducing their country's well-being relative to what it could be if appropriate investments were made.

This study uses a two-stage statistical analysis of a panel of approximately 170 countries over a period of 10 years to determine how government debt and expenses affect infrastructures, including intelligent ones (measured as the interaction between traditional infrastructure and broadband or secure Internet services). We want to identify the types of infrastructure that have the most impact on the Human Development Index and how ICTs and infrastructures can contribute to the well-being of a nation.
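One plausible reading of the described design is a panel regression in which intelligent infrastructure enters as an interaction between a traditional infrastructure measure and broadband penetration; the sketch below shows such a specification with country and year fixed effects, where the file name and variable names are hypothetical.

```python
# Sketch of the kind of panel specification described above (hypothetical variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: country, year, hdi, electricity, broadband, debt, gov_expense
panel = pd.read_csv("country_panel.csv")

# "Intelligent infrastructure" proxied by the interaction electricity * broadband;
# country and year dummies absorb fixed effects, errors clustered by country.
model = smf.ols(
    "hdi ~ electricity * broadband + debt + gov_expense + C(country) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})

print(model.params[["electricity", "broadband", "electricity:broadband"]])
```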


Saturday September 28, 2013 2:35pm - 3:10pm
GMUSL Room 120

2:35pm

Prime-Time Any Time: The Effect of Time-Shifted TV on Media Consumption
Download Paper

With the convergence of telecommunication technologies, traditional telecommunication providers have been facing increasing competition both from their direct competitors and from new online content providers. Faced with this strong competition, traditional telecommunication providers feel the pressure to innovate and offer more and better services to their clients. Nowadays many television bundles offer access to video-on-demand systems, TV over the Internet on computers and mobile devices, as well as digital video recording (DVR).



DVR became available as a response to the emergence of digital recording technologies such as TiVo (a device for digital recording of television broadcasts released in 1999) (Bronnenberg et al., 2010), and it introduced the concept of time-shifted television to the telecommunications and media industry.



Time-shifted television increases the alternatives that consumers can choose from at any point in time and allows them to bypass the programming of the different networks for free. It empowers consumers at the expense of content providers and advertisers, and it raises the marginal costs of service provision for telecommunication firms. Time-shifted television allows consumers to zip through commercials more easily (Wilbur, 2008). It makes predicting programme audiences more difficult because the mix of consumers who will actually watch them may change (Anderson and Gans, 2008), and it also increases network bandwidth requirements, making network planning and design more demanding and costly (Zhou and Lipowsky, 2004; Li and Simon, 2011).



Although this perspective on the impact of time-shifted television on telecommunication firms and advertisers seems gloomy, it is the actual behavior of consumers using these technologies that will determine whether they represent a real threat or an opportunity for further business. However, so far there is little empirical evidence on how these technologies actually affect consumers and their behavior towards television and telecommunication providers.



In this paper we characterize some of the changes that time-shifted television is triggering in television consumption patterns, and we discuss the implications for consumers and industry stakeholders alike. We work with a large telecommunication provider that started offering its clients a new time-shift service (hereinafter Look Back) that allows users to watch content originally aired up to 168 hours in the past from over 60 different television channels. We obtained a large dataset of television use, video-on-demand (VoD), internet consumption, and consumer acquisition and churn that spans a period before and after the release of Look Back, and we study its impact on consumer behavior.



Using a random sample of 50,000 time-shift events between September and December 2012, we find that most of these events pertain to content transmitted during the past 24 hours. Furthermore, most time-shift events are of content transmitted in the last few hours. This reveals that content is time sensitive, and that older content is not as attractive as newer content.
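The delay distribution behind this finding can be computed directly from event timestamps; a minimal sketch follows, with hypothetical file and column names.

```python
# Sketch: share of time-shift events by delay between original airing and viewing.
# File and column names ("viewed_at", "aired_at") are hypothetical.
import pandas as pd

events = pd.read_csv("lookback_events.csv", parse_dates=["viewed_at", "aired_at"])
delay_hours = (events["viewed_at"] - events["aired_at"]).dt.total_seconds() / 3600

within_24h = (delay_hours <= 24).mean()
print(f"Share of time-shift events within 24 hours of airing: {within_24h:.1%}")
print(delay_hours.describe())
```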

We also find evidence consistent with the superstar effect described by Rosen (1981): prime-time content is the most requested during prime-time, but also the most requested in off-peak hours. This suggests that being able to watch any content at any time increases the overall viewership of prime-time content to the detriment of other content, skewing the content viewership distribution. This has important implications for the TV and advertising industries: as time-shifting allows the disentangling of prime-time (the time when most people are watching TV) from prime-content (the content viewed by most people), the timing of (non-live) content exhibition becomes less important, and most of the value appropriation from advertising will shift to the content itself.



Finally, we show that Look Back increased the rate at which new consumers joined the telecommunication provider; at the same time, it also increased the rate at which consumers on bundles without time-shifting capability decided to upgrade their service.



By characterizing user viewership patterns in Look Back and how the service affected consumer acquisition and churn in a large telecommunication firm, we are able to provide insight into the business risks, but also the business benefits, that this type of service may bring to the industry as a whole.


Saturday September 28, 2013 2:35pm - 3:10pm
GMUSL Room 332

2:35pm

The Emperor Has No Problem: Is There Really Wi-Fi Congestion at 2.4 GHz?
Download Paper

"Wi-Fi congestion is a very real and growing problem." So said FCC Chairman Genachowski in a recent proceeding, and this sentiment is widely heard. But is it true? And what does the term congestion mean? We are aware of only three engineering studies that bear on these questions: Sicker et al., TPRC 2006; Mass Consultants for Ofcom, 2009; and Van Bloem & Schiphorst for the Dutch spectrum regulator, 2011. Unfortunately, these seminal efforts do not provide definitive answers. Beyond this, evidence for "congestion" problems in the 2.4 GHz ISM band is anecdotal and inconclusive at best.

While some users no doubt sometimes perceive "congestion" in some places, that is not sufficient to prove that there is a policy problem. We therefore set out to develop a list of user experience-oriented service impairment criteria that, if met, would prove that congestion exists to a degree that justifies regulatory intervention. While we focus on Wi-Fi services in the 2.4 GHz band, our proposed method of objective congestion criteria for packet-based ("all-IP") communication is generalizable to other cases, e.g. the discussion about a "spectrum crunch" in cellular bands. This is a step towards a larger research program that integrates concerns about occupancy, congestion, utilization, quality of service and user satisfaction into replicable metrics to characterize band use.

We base our work on a review of prior studies, new experimental data, and an analysis of the engineering factors underlying what is perceived as "congestion". In order to provide solid experimental evidence, we are planning lab and user experiments on service levels in 2.4 GHz Wi-Fi networks. We aim to collect data including performance logs for a campus network and massive network congestion with large numbers of users; user satisfaction surveys at times of different network load; and bench tests of congestion/degradation for different traffic types and use scenarios.

We understand claims about band congestion and spectrum shortage to be claims about achievable service levels, i.e. the degree to which users in a band can fulfill their service needs. As is well known in the literature, poor performance against commonly used low-level metrics such as spectrum occupancy, utilization or congestion does not necessarily imply service degradation; further, degradation does not necessarily mean that service needs cannot be met; and none of these metrics takes economic utility (user willingness to pay for a given service level) into account.

We therefore propose that "congestion" rises to a regulatory problem if scenario-specific performance metrics are reduced significantly, leading to a significant increase in the percentage of users who can't complete a valuable task, on a persistent, ubiquitous basis in two or more key scenarios in a band; and if this service degradation occurs in spite of the use of state-of-the-art best practices, such as careful frequency planning, the use of complementary bands, and careful access point placement. The numerical values of the parameters for significant performance reduction, percentage of users affected, recurrence and ubiquity will be informed by the experimental work described above.
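Read literally, the proposed test can be expressed as a simple decision rule; the sketch below encodes it with placeholder thresholds that the planned experiments would still have to calibrate.

```python
# Sketch of the proposed congestion test; all thresholds are placeholders
# pending the experimental calibration described above.

def rises_to_regulatory_problem(scenarios,
                                min_perf_drop=0.5,       # "significant" performance reduction
                                min_affected_users=0.2,  # share of users who can't complete a task
                                min_scenarios=2):
    """Each scenario is a dict with keys: perf_drop, affected_users,
    persistent, ubiquitous, best_practices_used (all placeholders)."""
    failing = [
        s for s in scenarios
        if s["perf_drop"] >= min_perf_drop
        and s["affected_users"] >= min_affected_users
        and s["persistent"] and s["ubiquitous"]
        and s["best_practices_used"]          # degradation despite best practices
    ]
    return len(failing) >= min_scenarios

example = [
    {"perf_drop": 0.6, "affected_users": 0.3, "persistent": True,
     "ubiquitous": True, "best_practices_used": True},
    {"perf_drop": 0.1, "affected_users": 0.05, "persistent": False,
     "ubiquitous": False, "best_practices_used": True},
]
print(rises_to_regulatory_problem(example))  # False: only one scenario fails the test
```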

Based on the collected experimental data, analysis and the resulting criteria we will then evaluate the anecdotal claims of congestion. Our preliminary conclusion, to be tested as work progresses, is that neither engineering studies nor stories in the press support the claim of widespread service level failure, contrary to received wisdom. So far at least, Wi-Fi congestion is not a real problem.

Moderators
Speakers
AA

Andreas Achtzehn

Institute for Networked Systems, RWTH Aachen University

Jean Pierre de Vries

Co-director, Spectrum Policy Initiative, Silicon Flatirons Center


Saturday September 28, 2013 2:35pm - 3:10pm
Founders Hall 111

3:10pm

More than Tools: ICTs Influencing Social Movement’s Opportunity Structures
Download Paper

The objective of this research is to explore the role of Information and Communication Technologies (ICTs) in the emergence and development of Social Movements. We ask: what role do ICTs play in providing or impeding opportunities for social movements to participate in political processes?



Opportunity structures are signals identified as such by a group of actors, in this case members of social movements, who are sufficiently organized to act on them (D. McAdam et al., 1996; J. D. McCarthy, 1996). So far, research has shown that ICTs influence movements' opportunity structures by making it easier to identify elites and acquire information about international events, and by making it more difficult for governments to regulate and censor the flow of information (Diani & McAdam, 2003; Garrett, 2006). However, research has not addressed whether ICTs represent an opportunity to access and participate in political processes.



We investigate whether ICTs are opportunity structures in their own right by determining two things: (1) if they are perceived as opportunity structures by members of the social movement, and (2) if they are strategically included in movement practices like forming alliances (Tarrow, 201; N Van Dyke, 2010; King, 2010; Smith 2010); and fighting repression (Althusser, 2001; Shurrman, 2012, Walder, 2009; Siegel, 2011; Carty, 2011; Shirky, 2011).



We conducted a mixed-method case study of the Honduran Resistance Movement, which rose after a coup d'état in 2009. This case is relevant for two reasons: one, it was the first in a wave of revolts around the world that relied heavily on ICTs to get organized; two, it is located in Latin America, an area that is going through important political and economic changes and that has been overlooked in studies of social movements and ICTs. We interviewed members of the movement, civil society organizations, the government and international organizations. We conducted observation of mobilizations and meetings, and collected qualitative data from newspapers and social media.



Preliminary results indicate that ICTs play a determinant role in this social movement not merely as tools, but also as an integral part of its social processes and structures. Members of the movement conceive of ICTs as opportunity structures; they make sense of them through their use and re-arrangement and include them strategically in their repertoire of mobilization. We deduce that ICTs are opportunity structures in their own right, as they were used strategically to participate in political processes by facilitating the creation of new alliances, fostering collaboration and dialog across institutions, and providing tools to fight impunity and repression. We propose a research agenda to further explore ICTs as opportunity structures.



The significance of this work is showcased by the increasing use of ICTs by social movements around the world, such as Occupy Wall Street, the Egyptian uprising, and the Chilean student movement. It is also highlighted by governmental and international organizations' support of policies to help civil society build digital capacity.


Saturday September 28, 2013 3:10pm - 3:45pm
GMUSL Room 221

3:10pm

Peer-to-Peer File Sharing and Cultural Trade Protectionism
Download Paper

We consider consumers' sharing of media content online, such as their exchange of film and music over file sharing networks, and examine its long-term implications for cultural policy and cultural diversity. Because cultural goods convey national identity and values, governments have often intervened to increase the consumption of domestic content. We present a simple trade model suited for the media sector and introduce political preferences over cultural consumption to explain content quotas (or screen quotas), a common form of protectionism in the sector. We show that the advent of online sharing, which allows consumers to bypass commercial distribution channels to access media content, renders quotas ineffective. We analyze the implications of online sharing for cross-border commercialization and consumption of cultural goods, find that multilateral cooperation to eliminate quotas is desirable if online sharing cannot be blocked or severely punished, and argue that it creates a threat to cultural diversity.

Moderators
Speakers
EN

Eli Noam

Columbia University


Saturday September 28, 2013 3:10pm - 3:45pm
GMUSL Room 225

3:10pm

Broadband's Contribution to Economic Health in Rural Areas: A Causal Analysis
Download Paper

The diffusion of broadband Internet access across America during the 2000s brought with it a significant amount of concern that rural areas might be left behind in terms of the availability, adoption, and benefits of this technology.[i] The presence of a rural-urban broadband "digital divide" is well documented in the economic literature.[ii] Several studies have suggested that broadband access positively impacts employment as well as other quality of life issues (health care, education, social linkages) in rural areas;[iii] however, many analyses were based on hypothetical assumptions or case studies.

State and federal policies related to broadband investment and deployment remain contested. As the funds flowing to additional advanced broadband infrastructure under the American Reinvestment and Recovery Act end, and as the FCC initiates changes to universal service, questions regarding where American broadband infrastructure stands and how high-speed capability contributes to economic development are receiving new attention. Better data from the NTIA/Census Current Population Supplements and the more detailed data from NTIA's investments in statewide mapping can be used to provide better responses to questions regarding broadband's contributions to economic health.

This paper uses the latest data on both broadband availability and adoption to empirically gauge the contribution of broadband to the economic health of rural areas. We utilize availability data from the National Broadband Map aggregated to the county level, and county-level adoption data from the Federal Communications Commission's Form 477. Economic health variables of interest are gathered from a variety of sources and include median household income, number of firms with paid employees, total employed, percentage in poverty, and the percentage of employees classified as either creative class or non-farm proprietors. A propensity score matching technique (between a "treated" group and a control group) is used to make causal statements regarding broadband and economic health. We measure whether the growth rates between 2001 and 2010 for different economic measures are statistically different for the treated and non-treated groups, restricting the analysis to non-metropolitan counties.
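A minimal sketch of the matching step described here follows; the county-level file and variable names are hypothetical, and the paper's actual covariates and matching details may differ.

```python
# Sketch of propensity score matching on county data (hypothetical column names).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

counties = pd.read_csv("nonmetro_counties.csv")
covariates = ["pop_density", "median_age", "pct_college", "income_2001"]

# 1. Propensity of being a high-broadband-adoption ("treated") county.
ps_model = LogisticRegression(max_iter=1000).fit(
    counties[covariates], counties["high_adoption"])
counties["pscore"] = ps_model.predict_proba(counties[covariates])[:, 1]

# 2. Nearest-neighbour match of each treated county to a control on the propensity score.
treated = counties[counties["high_adoption"] == 1]
control = counties[counties["high_adoption"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_controls = control.iloc[idx.ravel()]

# 3. Average treatment effect on the treated for income growth, 2001-2010.
att = (treated["income_growth_01_10"].values
       - matched_controls["income_growth_01_10"].values).mean()
print(f"ATT on income growth: {att:.3f}")
```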

Results suggest that high levels of broadband adoption in rural areas do causally (and positively) impact income growth between 2001 and 2010 as well as (negatively) influence poverty and unemployment growth. Similarly, low levels of broadband adoption in rural areas lead to declines in the number of firms and total employment numbers in the county. Broadband availability measures (as opposed to adoption) demonstrate only limited impact in the results.

We also use the FCC data to assess whether Connected Nation participants in two states achieved higher increases in broadband providers or adoption rates when compared to similar counties that did not participate in the program. We find that the program has a dramatic influence on the number of residential broadband providers in non-metro counties but no impact on increases in broadband adoption rates compared to otherwise similar non-participating counties. Consequently, there is no particular effect on economic growth.

This research offers an empirical documentation of the causal relationship between broadband and economic growth in rural areas, and offers an outside assessment of the highly visible Connected Nation program. It meshes the latest data on availability with that for adoption, and incorporates a technique that can make causal claims about how broadband affects specific measures of economic health. Thresholds for adoption / availability can be identified as having measurable impact on economic growth rates in rural areas, which leads to policy prescriptions.

NOTES:
[i] Malecki, 2003; Parker, 2000; Strover, 2001.
[ii] Whitacre and Mills, 2007; Dickes, Lamie and Whitacre, 2010; Strover, 2011.
[iii] Katz and Suter, 2009; LaRose et al., 2008; Stenberg et al., 2009.

Moderators
Speakers
BW

Brian Whitacre

Oklahoma State University, Oklahoma State University


Saturday September 28, 2013 3:10pm - 3:45pm
GMUSL Room 120

3:10pm

Trade Dress 2.0: Trademark Protects in Web Design What Copyright Does Not
Download Paper

This comment argues that businesses should use trade dress to protect the web design of their websites. Currently, protections exist under copyright; however, the limited rights copyright confers are not broad enough to protect all elements of web design. Trade dress would provide broader protection because it is based on a generalized likelihood of confusion analysis. Additionally, rather than the creative basis of copyright, the commercial underpinnings of trademark law are more effective at protecting the value of the website, which is the user interface. Practically, the copyright infringement analysis does not provide the full protections that a trade dress “likelihood of confusion” analysis would provide. Courts should use trade dress analysis because: 1) web designers need to protect the user interface of their programs, which mirrors the aims of trademark to protect the relationship of the consumer to the business; 2) the legal analysis for finding trade dress infringement protects the composition of the web design over the individual elements, which is the reverse of the copyright infringement analysis for software; and 3) the functional elements of trade dress are not automatically removed from the confusion analysis, which is the reverse of the copyright infringement analysis.

Moderators
Speakers

Gregory Melus

Juris Doctor, American University, Washington College of Law


Saturday September 28, 2013 3:10pm - 3:45pm
GMUSL Room 332

3:10pm

What About Wireless? An Investigation of Mobile Broadband in Fibre to the Home Environments
Download Paper

This paper explores the development and use of wireless broadband networks in the context of national telecommunications infrastructure development strategies that appear to focus on the rollout of fixed broadband networks. Building on our previous studies of next generation fixed broadband rollouts in Australia, Canada, New Zealand and Singapore (Middleton & Given, 2010), we explore policy objectives and the regulatory environments that guide the development of wireless broadband services in these countries. We couple these data with insights on consumer usage of wireless broadband services, with a view to understanding whether the current approaches to wireless broadband development are sufficient to meet growing demand for ubiquitous connectivity and to ensure stated government objectives around the provision of affordable, reliable broadband infrastructure.

Moderators
Speakers
CM

Catherine Middleton

Canada Research Chair, Ryerson University
Ryerson University -


Saturday September 28, 2013 3:10pm - 3:45pm
Founders Hall 111

4:10pm

Smartphones as a Substitute: Why Some Smartphone Users Aren't Subscribing at Home
Download Paper

For many Americans, mobile broadband represents a complement to their home broadband service. Others, though, rely on their smartphones as their sole means of Internet access, creating a new class of broadband users who not only go online via a different type of device, but have an entirely different online experience. It is vital to understand why so many Americans rely on smartphones instead of home broadband service. While previous research has addressed factors impacting individuals' decisions to subscribe to home broadband service, similar attention has not been paid to these "mobile-only" subscribers who use smartphones instead of subscribing to home broadband service.

Using a rich dataset collected by Connected Nation through 9,607 Random Digit Dial (RDD) telephone surveys conducted across a heterogeneous selection of states in 2012, this study examines why some individuals choose to be mobile-only subscribers while others also subscribe to home broadband service. This study aims to shed light onto the reasons why mobile-only subscribers first chose to subscribe to broadband service on their smartphones, as well as reasons why these individuals choose not to subscribe to home broadband service. The goal of this research is to educate policymakers and broadband providers regarding factors that make mobile broadband more attractive than home broadband service for millions of Americans.

Using cross-tabulated survey results and a binary logistic regression model, the authors use data from these surveys to measure the impact of geographic and socioeconomic factors such as an individual's race, ethnicity, income, age, education, gender, state and county of residence, and disability status on whether that individual will be a mobile-only subscriber. In addition, this study explores survey data regarding how mobile subscribers use their smartphones, how much they report paying per month for their mobile broadband service, how often they go online, and what convinced them to subscribe to mobile broadband service. Comparisons are then made between mobile-only subscribers and those who subscribe to both home and mobile broadband service.
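A hedged sketch of the kind of binary logit described here is shown below, with hypothetical survey file and variable names.

```python
# Sketch of a binary logit for mobile-only subscription (hypothetical variable names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("rdd_survey_2012.csv")

logit = smf.logit(
    "mobile_only ~ age + income + C(education) + C(race) + C(gender) "
    "+ C(state) + rural + disability",
    data=survey,
).fit()

print(logit.summary())
# Odds ratios are often easier to interpret for policymakers:
print(np.exp(logit.params))
```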

This study concludes that there are measurable differences between mobile-only subscribers and those who subscribe to both home and mobile broadband service in terms of their demographic profiles, the online applications they use, as well as the reasons why they subscribe to mobile broadband service on their smartphones. These differences can be important for policymakers as they attempt to increase home broadband service for all Americans and determine whether mobile broadband service should be considered a substitute or a complement for home broadband service.

Moderators
Speakers

Saturday September 28, 2013 4:10pm - 4:45pm
GMUSL Room 221

4:10pm

No Dialtone: The End of the Public Switched Telephone Network
Download Paper

All good things must come to an end. The set of arrangements known as the Public Switched Telephone Network (PSTN) is the foundation of the modern global communications system and the myriad benefits it delivers. Today, the era of the PSTN is swiftly coming to a close. The PSTN's technical, economic, and legal pillars have been undermined by three developments: the rise of the Internet; customers and providers abandoning wireline voice telephony; and the collapse of the regulatory theory for data services under the Communications Act. This paper provides a framework for moving beyond the PSTN by distinguishing the aspects of the existing system that should be retained, reconstituted, or abandoned.

The transition from the PSTN to a broadband network of networks is the most important communications policy event in at least half a century. It calls into question the viability of the Federal Communications Commission, the Communications Act, and the telecommunications industry as we know it. Yet the significance of the transition is not widely recognized. Attention has focused on specific manifestations and consequences, such as the rise of "wireless-only" households and problems with rural call completion.

The time has come to address the situation squarely. The lesson from prior structural transitions in communications, such as digital television, the AT&T divestiture, and the opening of local telephone competition, is that, with good planning and the right policy decisions, they can proceed smoothly and open new vistas for competition and innovation. Without such planning and decisions, these transitions become dangerous opportunities for chaos that can gravely harm the public interest.

There are two mainstream views about how to handle the PSTN transition. One is that it represents the completion of a deregulatory arc begun at divestiture and accelerated by the Telecommunications Act of 1996. The other is that longstanding regulatory obligations need only be extended to a new world. Both are wrong, because they treat the PSTN as a unitary thing. What we call the PSTN is actually six different concepts: 1) a regulatory arrangement; 2) a technical architecture to provide a service (wireline voice telephony); 3) a business and market structure; 4) interconnection obligations; 5) a social contract; and 6) strategic national infrastructure.

These aspects define not only the obligations on network operators (such as common carriage, universal service, and law enforcement access), but also demarcate the roles of the FCC and other agencies (including state public utility commissions, the FTC, the Department of Justice, and NTIA).

The six-faceted definition of the PSTN provides a roadmap for the transition. The elements earlier on the list are rooted in the particular historical, legal and technical circumstances that gave birth to the PSTN. They are anachronistic in the current environment, and should be restructured or, when appropriate, eliminated. The later elements are public policy obligations that should be satisfied regardless of the historical circumstances. Separating the dimensions of the transition in this way also makes it possible to focus in on specific challenges, such as the technical standards for addressing and service interconnection in an all-IP environment, separate from broad public interest questions such as the level of functionality that should be guaranteed to all Americans.

By examining the content of each aspect of the PSTN in light of the business and technological landscape today, we can ensure that the transition to a digital broadband world reinforces, rather than undermines, the achievements of the past century of communications policy.

Moderators
Speakers

Saturday September 28, 2013 4:10pm - 4:45pm
GMUSL Room 225

4:10pm

Platform Models for Sustainable Internet Regulation
Download Paper

A substantial challenge in developing regulatory theory to support communications policy is the highly dynamic technology and business practices in the evolving Internet. Traditional regulatory theory in this sector relied on simple models of technology, such as copper pairs carrying telephone service to homes. Innovative uses of that copper pair (e.g., DSL) and advanced technologies such as HFC, fiber and mobile, have led to definitional confusion, litigation, and a dauntingly complex, less understood, networked ecosystem. The impending convergence of virtually all communications services using the Internet Protocol (IP), both in public and private networks, renders the complexity and ambiguity even worse.

The goal of this research is to derive models of technology and industry practice that are general enough to survive current rates of innovation and evolution, and stable enough to support construction of regulatory theory that can remain relevant through this continuing evolution of our communications infrastructure. We will describe two models, an Interconnection Model and a Platform Layer Model, and describe technology and business practices that underpin these models. Part of this description will include anecdotal evidence of the shifts in power among classes of ISP, e.g., access, content, backbone, etc.

We will use three case studies to illustrate how these models can provide effective and consistent guidance to policy debates. In the process we will identify metrics that could bridge between technology and regulation, such as ways to analyze market power based on the best available data on interconnection patterns.
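As one example of such a bridging metric (our illustration, not necessarily the authors' choice), market power could be summarized by a concentration index computed over interconnection capacity shares.

```python
# Sketch: Herfindahl-Hirschman Index over interconnection capacity shares.
# Capacities are illustrative; real inputs would come from interconnection data sets.

capacity_gbps = {"ISP_A": 400, "ISP_B": 250, "ISP_C": 150, "ISP_D": 100, "others": 100}

total = sum(capacity_gbps.values())
shares_pct = {isp: 100 * c / total for isp, c in capacity_gbps.items()}
hhi = sum(s ** 2 for s in shares_pct.values())

print(f"HHI = {hhi:.0f}")  # values above 2500 are conventionally flagged as highly concentrated
```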

Our first case study is an imminent, economically inevitable innovation about to shatter our already cracking models of communication regulation: the "converged IP network", often naively conflated with the public Internet. While the public Internet is one of many services that can be delivered over IP, many other "specialized services", including some VoIP services, may use Internet protocols but not public Internet transport. The FCC introduced the term "specialized services" with no clear definition of what they are, much less how or whether they should be regulated. Our Platform Layer model relies on terminology and theory from the economics literature to show how such a model can inform policy discourse on this innovation.

Our second case study is interconnection, particularly the complexity associated with proliferating CDNs and Internet exchanges (IXes). Although the FCC has recently focused on broadband access issues, interconnection points among ISPs have become another opportunity for discriminatory behavior in terms of either performance or pricing. The emergence and influence of the Content Distribution Network industry, which now sources most traffic consumed by users, increases the complexity of both routing and the economics of interconnection. Several peering disputes involving CDNs have grown publicly contentious, including assertions that ISPs deliberately allow paths to become congested in order to improve their position when negotiating interconnection agreements. Our interconnection model captures the necessary complexity of today's ecosystem more faithfully than those appearing in the literature. We also show a useful duality between this model and the recursive layering model.

Our third case study is a futuristic scenario under active research today -- an entirely new Internet architecture that more naturally supports emerging patterns of communication. Information-Centric Networking is an architecture research area gaining momentum around the world for its potential to improve efficiency, scalability, robustness, and capabilities in challenging communication environments. The regulatory challenge of ICNs is that they embed content management into the routers of the network layer, thus further blending content and transport. We will use this case study to test the generality of our models, and offer some initial thoughts on the regulatory implications were ICNs to be deployed by facilities owners.


Saturday September 28, 2013 4:10pm - 4:45pm
GMUSL Room 120

4:10pm

How Do Limitations in Spectrum Fungibility Impact Spectrum Trading?
Download Paper

Markets in naked spectrum have been discussed since Coase first suggested market-based spectrum allocation in 1959. While the initial attention was paid to primary markets (i.e., spectrum auctions), researchers and policymakers soon realized the limitations of primary markets in the absence of secondary markets. In time, regulators relaxed policies so that broker-based secondary sales emerged. Broker-based markets lack the price transparency and liquidity of exchange-based markets, so researchers have considered these as well.

Regardless of type, most analyses assume that the spectrum bands being traded are fungible. However, outside of limited circumstances, this assumption is difficult to defend in practice. Data from auctions shows that different prices obtain for different bands, arguably due to different physical properties. Fungibility is even less defensible as one considers paired vs. unpaired spectrum and differences in regulatory rules associated with different bands.

We extend prior work to consider the impact of fungibility limitations on the liquidity of the market for naked spectrum (i.e., spectrum license trading). To do this, we modify the Agent-based Computational Economics (ACE) based SPECTRAD model developed in earlier work to allow us to study how deviations from the fungibility assumption affect the liquidity of trading markets. In particular, we consider how the coverage differences associated with different frequency bands affect the substitutability of the bands, which, in turn, impacts the liquidity of the underlying markets. Some examples of this include the AWS-1 band and the 1755-1850 MHz band that is currently being considered for sharing, or, alternately, the 2300 MHz band. We will also consider how differences in quality (i.e., signal-to-noise ratio) affect the liquidity of spectrum markets. An example of this kind of effect is the 700 MHz A block compared with the B block.
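To make the coverage argument concrete, the sketch below (our own illustration, not the SPECTRAD model itself) uses the free-space assumption that range at a fixed link budget scales inversely with frequency to derive a crude substitutability ratio between two bands.

```python
# Sketch: crude coverage-based substitutability ratio between two bands,
# using the free-space assumption that range scales inversely with frequency
# for a fixed link budget. Illustration only; not the SPECTRAD agent model.

def coverage_area_ratio(freq_a_mhz, freq_b_mhz):
    """Relative cell area a single site covers in band B compared to band A."""
    range_ratio = freq_a_mhz / freq_b_mhz      # free-space range scales as 1/f
    return range_ratio ** 2                     # area scales with range squared

# Example: how much area does a 2100 MHz site cover relative to a 700 MHz site?
print(f"{coverage_area_ratio(700, 2100):.2f}")  # ~0.11: far more sites needed

# An agent weighing a trade might discount a higher band's value by this ratio
# (plus deployment cost differences), reducing substitutability and hence liquidity.
```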

While it seems certain that decreases in fungibility will result in decreases in liquidity, it is unclear what the magnitude of those decreases is for particular spectrum trading scenarios. For instance, we will examine at which frequency offset liquidity decreases significantly. Finally, the overall goal of this paper is to (1) understand the role of fungibility limitations in spectrum markets and (2) provide policymakers aiming to craft rules for spectrum markets with quantitative insight into what promotes liquid markets. We expect that a practical consequence of this work could be the development of more focused spectrum markets rather than a single, general-purpose market as has been modeled previously.

References

[1] Mark Bykowsky. A secondary market for the trading of spectrum: promoting market liquidity. Telecommunications Policy, 27(7):533-541, 2003.

[2] Carlos Caicedo and Martin B.H. Weiss. The viability of spectrum trading markets. IEEE Communications Magazine, 43(3):46-52, 2011.

[3] R. H. Coase. The federal communications commission. Journal of Law and Economics, 2:1-40, 1959.

[4] A. Kerans, Dat Vo, P. Conder, and S. Krusevac. Pricing of spectrum based on physical criteria. In New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2011 IEEE Symposium on, pages 223-230, 2011.

[5] J. W. Mayo and S. Wallsten. Enabling efficient wireless communications: The role of secondary spectrum markets. Information Economics and Policy, 22(1):61-72, 2010.

[6] M. Weiss, P. Krishnamurthy, L.E. Doyle, and K. Pelechrinis. When is electromagnetic spectrum fungible? In IEEE Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN), 2012.

Moderators
Speakers

Marcela Gomez

PhD Student - Telecommunications and Networking Program, University of Pittsburgh - School of Information Sciences

Martin Weiss

Associate Dean, University of Pittsburgh


Saturday September 28, 2013 4:10pm - 4:45pm
Founders Hall 111

4:45pm

WiFi As A Commercial Service
Download Paper

While Wi-Fi has enjoyed explosive growth and deployment for use in residential homes, the rollout of commercial Wi-Fi service has been more limited. Part of the holdback on large-scale commercial deployment has been the strategic concern that the commons model of spectrum management lacks the incentives for service providers to invest, due to the limited ability to manage interference in the unlicensed band. Today, however, this situation is changing. Joining already significant Wi-Fi deployments by mobile operators, large cable operators committed last year to the nationwide deployment of over 100,000 Wi-Fi hotspots.

This paper will answer two important questions raised by the growing interest in Wi-Fi as a commercial service: 1. Why is there growing confidence in Wi-Fi as a commercial wireless platform despite its unlicensed status? 2. What are the technical and policy implications of a significant commercial Wi-Fi presence?

To answer the first question, the paper will argue that the growing confidence in Wi-Fi as a commercial platform is due to the activities of the Wi-Fi Alliance and the IEEE 802.11 standards group. The paper will review these activities, such as Passpoint 2.0, and then compare them to the role of a traditional spectrum manager. It is anticipated that this review will demonstrate that a large number of existing features of 802.11 map closely to the functions traditionally employed by an effective band manager that is optimizing efficiency on a licensed spectrum block. The gap analysis between 802.11 features and band manager roles may also identify new areas of focus for the Wi-Fi Alliance and IEEE 802.11 to continue this role.

To answer the second question, we will examine how the requirements for commercial Wi-Fi are different from those for personal Wi-Fi, and how they may diverge over time. Amongst the hotspots affiliated with their services, commercial service providers will need to manage interference and to monitor and improve network performance. The paper will discuss the ideas currently under discussion in 802.11 for the next version of Wi-Fi, such as transmit power control, additional 5 GHz spectrum, frequency planning, and beamforming, and whether they will meet these new commercial requirements. The answer to this question is key, as a failure to address commercial requirements could lead to oversaturation of the 5.x GHz band in addition to the already congested 2.4 GHz band (i.e., the tragedy of the commons).

Moderators
Speakers
DR

David Reed

University of Colorado at Boulder, University of Colorado
Dr. David Reed is the Faculty Director for the Interdisciplinary Telecommunications Program at the University of Colorado at Boulder. He also leads the new Center for Broadband Engineering and Economics that specializes in the interdisciplinary research of the emerging broadband ecosystem, and is Senior Fellow, Silicon Flatirons Center for Law, Technology, and Entrepreneurship at the University of Colorado.


Saturday September 28, 2013 4:45pm - 5:20pm
GMUSL Room 221

4:45pm

Competition Versus Regulation in the Post-Sunset PSTN
Download Paper

This paper proposes a new framework to address competition, consumer protection and public interest concerns in a "post-sunset" PSTN broadband ecosystem (BE). In the BE, enterprises are connected horizontally and vertically. Envisioning those enterprises as points within a three-dimensional lattice, the paper models a way to balance the promotion of competition with a range of governance policies in the absence of competition. This is a response to the current regulators' dilemma: companies like Google, Facebook, Amazon, Apple, Microsoft and others are now in many of the same markets as companies which were once primarily carriers, such as AT&T, Verizon, Time Warner and Comcast, which are now rapidly diversifying and fleeing into an unregulated all-IP mode.



The FCC's National Broadband Plan (NBP) envisions ubiquitous broadband access in the U.S., accomplished by a transition from traditional telephone technology (analog circuits, TDM switches, and related infrastructure components) to an Internet Protocol (IP)-based national broadband network. However, the NBP does not specify a migration path from the old network to the new one, leaving critical questions of technology, business and regulation unanswered. The FCC and the market are now in a race to see which will answer them first.



This transition is being "forced" by old TDM equipment reaching end-of-life and by large numbers of customers migrating away from traditional wireline voice communications, substituting mobile, VoIP and other alternatives. As the subscriber base declines while the cost of maintaining the old network stays fixed (or increases), the cost per customer rises and profitability decreases, creating a voice "death spiral". The major telecommunications carriers are already rapidly distancing themselves from the "old" telecommunications service and moving to diversified IP-based services.



In the technical area there are transition questions about numbering, interconnection and interoperability, quality of service, and spectrum among others. From the business perspective, traditional carriers are faced with finding new business models to function in what can be described as a three-dimensional lattice that comprises the metastructure of the BE. From a regulatory perspective, there is a fundamental challenge as to whether the FCC has any jurisdiction over IP-based services at all under current rules. A long-term solution to these problems requires a new way of thinking about the structure of the market as implemented consistent with the NBP.



The FCC has initiated a process to consider this, within the presumed scope of its current jurisdiction, although it may ultimately require Congress or the courts to define that scope. At the same time, the major carriers such as AT&T, Verizon, Time Warner and Comcast have initiated an aggressive campaign to have all IP-based services deregulated as 'information services'. Arrayed against this are civil society/public interest organizations devoted to sustaining the traditional 'social contract' with respect to communications in the public interest.



The ultimate outcome is uncertain, but it appears neither side has the political influence to win a total victory. Given that, some policy experts are proposing a 'middle way' in which the broadband network evolves against a set of general principles to assure competition and protect the public interest. This could involve an ex ante anti-trust/consumer protection approach, or a light-touch version of traditional regulation. However, there is as yet no coherent theoretical framework within which to decide what action (or forbearance) is appropriate under what conditions. This paper's three-dimensional lattice model is a step in that direction.

Moderators
Speakers

Saturday September 28, 2013 4:45pm - 5:20pm
GMUSL Room 225

4:45pm

Framing the Value of Internet Exchange Participation
Download Paper

Internet eXchanges (IXes) are developed to localize traffic, reduce connectivity costs, and reduce latency. Historically, transit savings were substantial enough to justify investment in IX interconnection strategies, overshadowing other benefits enjoyed by IX participants. Recent work argues these basic IX effects facilitate regional economic development. In developed regions, as transit prices converge with comparable per-volume prices on Internet exchanges, a new question emerges: given this convergence, what additional value and benefits does the IX provide existing and would-be participants? Governance and organization studies of IXes provide some of the nuance necessary to answer this question. Based on interviews with IX managers and participants, there is value in a more fluid market of interconnection options that facilitates investment deferral and greater control over the repurposing of interconnection resources. Interconnection is presented in terms of platforms and how they implement interconnection options, highlighting how the mechanics of these implementations frame network actors' decisions regarding how to maximize access to and investments in interconnection markets.

Semi-structured interviews with a wide range of network operators and IX operators over the last year and a half imply decision heuristics for optimizing value and network resilience: selectively increasing redundancy, increasing the number of unique interconnection partners, and reducing switching costs are among the most common criteria. This work distills the decision heuristics identified thus far into a partially parameterized model of interconnection decision making across platforms that serve as markets for interconnection options. Unpacking decision heuristics contributes to more precisely explaining the mix of interconnection options available to actors that derive value from network- and application-level services but also influence infrastructure investment and strategy decisions through the options they select. This work demonstrates how IX platforms facilitate greater access to interconnection options and defer the bargaining and measurement costs necessary for more specific asset investments. IX-enabled interconnection contributes to the "flattening" of the Internet by providing an investment path to more sophisticated bundles of (flatter) bilateral relations rather than participation solely in the transit hierarchy. The model developed here provides a clear formalism for comparing these options.
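
As a toy illustration of how such heuristics might be parameterized (this is not the authors' model; the options, attribute values, and criterion weights below are invented for illustration), a simple weighted scoring of interconnection options could look like this in Python:

# Toy scoring of interconnection options against the elicited heuristics
# (weights and attribute values are assumptions, not interview data).
options = {
    "transit only":      {"redundancy": 0.2, "unique_peers": 0.1, "switching_cost": 0.8},
    "IX port + transit": {"redundancy": 0.7, "unique_peers": 0.8, "switching_cost": 0.3},
    "private peering":   {"redundancy": 0.5, "unique_peers": 0.2, "switching_cost": 0.6},
}
weights = {"redundancy": 0.4, "unique_peers": 0.4, "switching_cost": -0.2}  # cost counts against

def score(attrs):
    return sum(weights[k] * v for k, v in attrs.items())

for name, attrs in sorted(options.items(), key=lambda kv: -score(kv[1])):
    print(f"{name:18s} score = {score(attrs):+.2f}")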

Two classes of parameters are missing: connectivity cost information and parameters representing actor-specific valuations of system properties such as redundancy or latency. A key contribution is to understand the options available by carefully specifying the parameter space based on empirical observation: what are the variables and how are they conceptually related? This work unpacks the mechanisms implementing two common types of interconnection option and concludes with working hypotheses that will be further refined and tested in ongoing work. The next stage of this work focuses on identifying appropriate operationalizations, data sources for cost, and strategies for comparing (validating) theoretical valuations based on empirical operationalizations of variables such as redundancy.

A social science approach to indicator development is applied to frame heuristics as background concepts that are refined into specifications of incentives, relationships, and potential indicators. Variables specified here represent possible systematizations of background concepts elicited from interviews. Neither the specification nor the relationships are presented as being writ in stone; rather, they are part of an ongoing project to elicit empirical trends in the interconnection industry. The variety of network actors that engage with IXes, their value propositions, and the means by which they connect to IX platforms provide insight into which interconnection options are appealing to which types of actors, as well as into common trade-offs. Qualitatively, this gives insight into which types of actors are willing to invest in which mixes of infrastructure types and topologies. By highlighting trade-off conditions in the demand for interconnection, this specification provides direction for further identification of interconnection costs and the constraints faced by those making investment decisions.

The framework offers a number of contributions. Different interconnection modes are conceptualized independently, but the framework subsequently highlights their mutual dependence and benefits. Working hypotheses address how different platforms (transport, colocation, IX) structure and mediate interconnection option implementations and how they are interleaved with one another in practice, and offer a first pass at identifying critical paths through investment decisions as network actors grow and develop specific relationships (and asset investments) beyond simple transit. Building on the notion of a real option, flexibility in general purpose resources (IX-mediated interconnection) facilitates staged, dynamic investment decisions and learning effects.




Saturday September 28, 2013 4:45pm - 5:20pm
GMUSL Room 120

4:45pm

Search Engines after 'Google Spain': Internet@Liberty or Privacy@Peril?
Download Paper

The exponential growth of social media, and the Internet in general, has led to a massive increase in the amount of personal data that can be found online. While disclosure of personal information is a logical byproduct of social activity, the extended longevity and broad accessibility of this information has given rise to specific reputational concerns. More and more, individuals are finding it difficult to "escape their past" or experience embarrassment when their personal information is taken out of context. Putting a stop to unwanted disclosures can be difficult, however, particularly in the online environment. The entity responsible for the disclosure (or further dissemination) of personal data may be hard to find or may reside in a different jurisdiction. Moreover, this person might invoke a right to freedom of expression, or brush aside defamation claims because the information is true. This is why - especially in the EU - Internet intermediaries (search engines, social networks, hosting platforms, etc.) are increasingly solicited to help put a stop to the further dissemination of this data. Such requests place intermediaries in a very awkward position. They are, in effect, confronted with three sets - a triangle, if you will - of competing interests: the privacy interests of the person requesting removal, the freedom of expression interests of the discloser/distributor, and the interests of the intermediary to remain exempt from any liability resulting from the dissemination of injurious content. A notice and takedown solution - similar to copyright enforcement - seems reasonable at first, but raises significant concerns when looked at more closely. Can (and if so, should) intermediaries be asked to strike the balance between freedom of expression and privacy? If so, how should this balance be made? Can risks of "systematic take-down" be sufficiently mitigated to ensure freedom of expression is not unduly curtailed? These issues lie at the heart of Google v Spain (C-131/12), a case currently pending before the Court of Justice of the European Union (CJEU). The objective of this paper is to analyze, from a European perspective, the interaction between three branches of law, namely privacy, freedom of expression and the liability of intermediaries. Throughout this analysis, the Google v Spain case will serve as a frame of reference. Where relevant, comparisons will be made with the situation in the US, where most of these intermediaries are headquartered. It is planned that this paper will be finished around the time of the CJEU's final judgment (late summer/early fall), so that it could ideally also be presented at TPRC. The eventual goal of the paper is to provide a comprehensive overview of these issues and come up with clear policy recommendations on how to move forward.


Saturday September 28, 2013 4:45pm - 5:20pm
GMUSL Room 332

4:45pm

Can Unlicensed Bands Be Used by Unlicensed Usage?
Download Paper

Since their introduction, the unlicensed ISM bands have resulted in a wide range of new wireless devices and services. The success of ISM resulted in the opening of the TV white space for unlicensed access. Further bands (e.g., 3550-3650 MHz) are being studied to support unlicensed access. Expansion of the unlicensed bands may well address one of the principal disadvantages of unlicensed frequency (variable quality of service), which could result in a vibrant new group of companies providing innovative services and better prices. However, given that many commercial mobile telephone operators are relying heavily on the unlicensed bands to manage growth in data traffic through an 'offloading' strategy, the promise of these bands may be more limited than might otherwise be expected.

Wireless data traffic has exploded in the past several years due to more capable devices and faster network technologies. While there is some debate about the trajectory of data growth, notable reports include AT&T, which reported data growth of over 5000% from 2008 to 2010, and Cisco, which predicted that mobile data traffic will grow to 6.3 exabytes per month on average by 2015. Although data traffic has increased dramatically, relatively little new spectrum for mobile operators has come online in the last several years; further, 'flat-rate' pricing strategies have led to declining Average Revenue Per User (ARPU). The challenge for mobile operators is how to satisfy the service demand with acceptable additional expenditures on infrastructure and spectrum utilization.

A common response to this challenge has been to offload data traffic onto unlicensed (usually WiFi) networks. This can be accomplished either by establishing infrastructure (WiFi hotspots) or by using existing private networks. This phenomenon creates two potential risks for spectrum entrants: (1) the use of offloading may overwhelm unlicensed spectrum and leave few access opportunities for newcomers; (2) the intensity of the traffic may increase interference and degrade innovative services.

Consequently, opening more unlicensed frequency bands alone may not necessarily lead to more unlicensed usage. The particular goals of this paper are: quantifying unlicensed spectrum capacity under traffic offloading from primary mobile providers; assessing the impact of additional bands on overall unlicensed capacity; and identifying potential risks for unlicensed users in ISM and TV white space.

We will accomplish these goals by (1) adopting a Markov-based model to calculate spectrum capacity for unlicensed users, considering the potential traffic offloading from mobile providers, and quantifying the extra capacity that additional spectrum would provide; and (2) building a real-option-based engineering economic model to illustrate the monetary risks in unlicensed frequency and to capture the value of risk mitigation strategies. Since there are two unlicensed bands that are suitable for data communications, the capacity and economic models will be evaluated in two cases: in case 1, mobile operators use WLAN in the ISM bands, and in case 2, TV white space is chosen to provide wireless services.
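
As a rough illustration of the kind of capacity question the Markov model addresses (this sketch uses a textbook Erlang-B blocking calculation rather than the authors' model, and the channel counts and traffic loads are assumed purely for illustration):

from math import factorial

def erlang_b(channels: int, offered_load_erlangs: float) -> float:
    """Blocking probability for `channels` servers and Poisson offered load (Erlang B)."""
    num = offered_load_erlangs ** channels / factorial(channels)
    den = sum(offered_load_erlangs ** k / factorial(k) for k in range(channels + 1))
    return num / den

# Hypothetical illustration: capacity left for unlicensed entrants once
# mobile-operator offload traffic occupies part of the band.
channels = 11        # assumed number of usable unlicensed channels
entrant_load = 3.0   # Erlangs offered by new unlicensed users (assumed)
for offload_load in (0.0, 4.0, 8.0):
    blocking = erlang_b(channels, entrant_load + offload_load)
    print(f"offload={offload_load:4.1f} E -> entrant blocking = {blocking:.3f}")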

[1] Summit Ridge Group, 'Could Incumbent Wireless Companies Also Dominate Unlicensed Spectrum?', Feb. 25, 2013.
[2] Hu, Liang, et al. "How Much Can Wi-Fi Offload? A Large-Scale Dense-Urban Indoor Deployment Study." Vehicular Technology Conference (VTC Spring), 2012 IEEE 75th. IEEE, 2012.

Moderators
Speakers

Liu Cui

West Chester University

Martin Weiss

Associate Dean, University of Pittsburgh


Saturday September 28, 2013 4:45pm - 5:20pm
Founders Hall 111

5:20pm

Measuring Consumer Preferences for Video Content Provision Via Cord-Cutting Behavior
Download Paper

The convergence of video content provision is an ongoing process, and the path it follows will have wide-ranging effects in the American economy. Two recent events illustrate how the convergence is taking place: 1. The digital switchover and 2. The emergence of streaming network video content online. Consequently, in today's telecommunications marketplace, consumers wishing to enjoy on-demand video content in their homes have three basic choices: Over-the-air (OTA) via digital antenna, paid subscription to cable or satellite, or online streaming (free or with a subscription).

In this paper, we aim to measure key determinants of consumers' choices across these video provision options. A question of particular interest is the following: Do consumers weigh video media characteristics (e.g., method of content delivery, content availability) when choosing among the three on-demand options, or is this decision primarily governed by financial means and Internet savvy? Related questions of interest include: Did the digital switchover affect demand for cable/satellite, and if so, along what dimensions? Which demographics have responded to the recent availability of online streaming network video, and to what extent do their choices depend on content availability? What is the role of income shocks in consumers' choices across these three media options? Said succinctly, our questions measure the importance of income, content, and lead user effects, and distinguish between their importance for user behavior and supplier strategy. To our knowledge, this is the first paper to attempt to measure consumer preferences across video provision methods that include online streaming.

To address these questions, we employ a rich dataset provided by Forrester Research. The data consist of independent cross sectional surveys of tens of thousands of American households each year from 1998 to 2010. These surveys collect information on technological purchases and preferences, as well as a wide range of demographic information (income, education, etc.) and location. We focus our analysis on the last few years of the survey, when the aforementioned shifts in the video content provision market occurred in the United States.

Our econometric analysis focuses on pay television dropping behavior, and how this behavior was influenced by changes in the telecommunications landscape. Our baseline model uses a binary variable indicating subscription to pay television in period t (our initial analysis will focus on 2008 or 2009) as the dependent variable. It looks as follows: TV_it = β_0 + β_1·TV_i,t-1 + β_2·X_i + β_3·I(t=2009) + β_4·I(t=2009)·TV_i,t-1 + β_5·Content_i1,t-1 + … + β_(k+4)·Content_ik,t-1 + β_(k+5)·I(t=2009)·Content_i1,t-1 + … + β_(2k+4)·I(t=2009)·Content_ik,t-1 + ε_it.

Here, X_i is a set of time-invariant household characteristics for household i (e.g., education, location, family size, and income), I(t=2009) is an indicator variable for the second wave of data, and Content_ij,t-1 is a binary variable indicating whether household i viewed pay television content j at time t-1.

Using this model and estimation methods suitable for repeated cross-sectional data (as in Prince and Greenstein, 2013), we will be able to identify the change in overall pay television dropping rates (between 2008 and 2009) via the interaction term. Further, we can identify the effects of prior content choice for both years. The structural shift in on-demand video provision (through newly available streaming content via broadband Internet and the digital switchover in over-the-air TV) suggests that prior content choice will have a differing impact on pay TV dropping behavior in 2009 as compared to 2008. This is because these two changes had differing implications for the availability of content through alternative means. For example, the digital switchover primarily impacted the quality of network television, and local stations, available over the air; in contrast, the increase in streaming capability allowed for access to a subset of 'cable channels,' such as Comedy Central, much more easily over the Internet.
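
For a concrete sense of the estimation, a minimal sketch of the baseline specification as a linear probability model on simulated pooled cross sections is shown below; the data, coefficient values, and variable names are invented stand-ins for the Forrester variables, not results from the paper:

import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated pooled cross sections (all values hypothetical, for illustration only)
year2009 = rng.integers(0, 2, n)          # I(t = 2009)
paytv_lag = rng.integers(0, 2, n)         # TV_{i,t-1}
content_lag = rng.integers(0, 2, n)       # one Content_{ij,t-1} indicator
income = rng.normal(0, 1, n)              # one element of X_i

# "True" coefficients used to generate an outcome, then recovered by OLS
beta = np.array([0.2, 0.5, -0.10, 0.05, 0.1, -0.08])
X = np.column_stack([np.ones(n), paytv_lag, year2009,
                     year2009 * paytv_lag, content_lag, year2009 * content_lag])
y_latent = X @ beta + 0.05 * income + rng.normal(0, 0.3, n)
paytv = (y_latent > 0.4).astype(float)    # binary subscription outcome

# Linear probability model: the interaction coefficients capture the
# 2008-to-2009 shift in dropping behavior described in the abstract.
coef, *_ = np.linalg.lstsq(X, paytv, rcond=None)
names = ["const", "TV_lag", "I(2009)", "I(2009)*TV_lag", "Content_lag", "I(2009)*Content_lag"]
for name, c in zip(names, coef):
    print(f"{name:>20s}: {c:+.3f}")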

We believe our analysis will bring to light new knowledge about: how consumers compare different types of on-demand video content provision, key drivers of substitution across the main types of provision, and the demographic subgroups most responsive to changes in these types of provision.

Moderators
Speakers

Saturday September 28, 2013 5:20pm - 5:50pm
GMUSL Room 221

5:20pm

The Evolution of the Generalized Differentiated Services Architecture and the Changing Role of the Internet Engineering Task Force
Download Paper

Due to the transition from narrowband to broadband access, heterogeneous requirements for traffic quality are becoming increasingly important, taking into account the need to prioritize data packets and to guarantee quality of service. A transition to active network management, and subsequently to a more 'intelligent' Internet traffic architecture, is required. In the meantime, the role of the IETF has changed from developing and enforcing a universal-connectivity TCP traffic infrastructure to providing a platform for dealing with the increasing need for variety in the design of different traffic management infrastructures and the continuous search for new technological solutions.

Standards tracks for quality of service differentiation were initiated under the headings of the Integrated Services/Resource Reservation Protocol (IntServ/RSVP) and Differentiated Services (DiffServ) architectures nearly fifteen years ago. The analysis of this standard setting process over the last fifteen years shows that neither the DiffServ architecture nor IntServ/RSVP has reached the final status of Internet Standard. Moreover, important developments of network management design for DiffServ networks did not even reach standards track status, but are considered to be on the non-standards track. This includes the Configuration Guidelines for DiffServ Service Classes. In addition, Procedures for Modifying the RSVP reached the status of Best Current Practice for the Internet community.

From the perspective of fostering the evolutionary search for innovative traffic management architectures, the increasing role of optional (non-obligatory) non-standards tracks for quality-of-service-differentiated network management should not be considered a weakness but a strength of the IETF. As a standard setting agency, the IETF cannot take over entrepreneurial decisions regarding the choice of traffic management investment and the required allocation mechanisms for fulfilling the heterogeneous requirements for traffic qualities and implementing incentive compatible pricing mechanisms. In contrast to the technical neutrality of TCP, market driven network neutrality requires an entrepreneurial search for quality of service differentiation, avoiding incentives for Internet traffic service providers to discriminate between possible network applications on the basis of network capacity requirements.

Therefore, the traditional market split of telecommunications providers into specialized services based on active traffic management for the provision of high quality of service levels (e.g. VoIP, IPTV) and passive (TCP-based) best effort Internet is unlikely to remain stable within IP-based next generation networks. A forward looking economic model of active traffic management is provided within a generalized DiffServ architecture based on congestion pricing and quality of service differentiation. Within this 'umbrella' architecture for traffic management, a flexible framework for different traffic quality differentiation strategies is provided. Since its basic characteristic is that all applications bear the opportunity costs of the required traffic capacities, the traditional differentiation between managed services and other IP-based Internet services becomes obsolete. In order to guarantee the high quality of VoIP or IPTV, top quality classes can be introduced using the principle of resource reservation with guaranteed end-to-end control. For applications which are less delay sensitive but still require some active traffic management, lower traffic quality classes are sufficient; for those applications which are not delay sensitive, a 'best effort' transmission class may be introduced. In order to provide incentive compatible quality of service differentiation within a generalized DiffServ architecture, transmission charges must be monotone increasing, with the highest quality class paying the highest transmission charges, and the 'best effort' class may be provided for free.
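
A minimal sketch of the monotone-pricing condition described above, with a hypothetical (purely illustrative) ladder of traffic classes and charges, might look like:

from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    quality_rank: int        # higher = stricter delay/loss guarantees
    price_per_gb: float      # transmission charge

# Hypothetical class ladder for a generalized DiffServ architecture (illustrative values)
classes = [
    TrafficClass("guaranteed (VoIP/IPTV)", 3, 0.40),
    TrafficClass("assured (interactive)",  2, 0.15),
    TrafficClass("background",             1, 0.05),
    TrafficClass("best effort",            0, 0.00),
]

def incentive_compatible(ladder):
    """Charges must be monotone increasing in quality; best effort may be free."""
    ordered = sorted(ladder, key=lambda c: c.quality_rank)
    return all(lo.price_per_gb <= hi.price_per_gb
               for lo, hi in zip(ordered, ordered[1:]))

print("monotone price ladder:", incentive_compatible(classes))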

Moderators
Speakers

Saturday September 28, 2013 5:20pm - 5:50pm
GMUSL Room 120

5:20pm

The Economics of Spectrum Sharing
Download Paper

In light of the growing demand for wireless broadband spectrum, significant attention is now being devoted to more effectively utilizing federal allocations of spectrum, either by entirely repurposing for commercial use or sharing with commercial users. Spectrum sharing is believed to be one way to allow commercial users to access a band without incurring the costs of completely clearing existing federal users. While sharing avoids costs of clearing federal users, spectrum sharing has its own costs and is likely to impact the value of the shared spectrum. The purpose of this paper is to understand the economic tradeoffs associated with various spectrum sharing arrangements.

This paper will begin with a brief description of current policy issues around sharing and review potential frameworks for making spectrum allocation decisions. We posit that efficient spectrum allocation decisions should maximize the likelihood of achieving the policymakers' goals. Moreover, whatever the goals of policymakers, economically efficient use of spectrum creates value for achieving those goals. Consequently, efficient spectrum use should only be sacrificed for explicit policy objectives that are considered more valuable than the forgone value of using the spectrum efficiently. In trying to achieve efficient spectrum management, or in evaluating proposed departures from efficient use of spectrum, it is crucial to know the costs and forgone opportunity associated with any allocation policy. This is especially true when evaluating spectrum sharing proposals, because any specific proposal inevitably involves a trade-off between costs and benefits of two or more competing users.

To understand this tradeoff we will discuss the theoretical drivers of commercial spectrum value. The value of spectrum is essentially derived from the inherent profitability of wireless services deployed on the spectrum. In the case of shared spectrum, the total value of the spectrum is the sum of the value to each shared use. To the extent that sharing restricts the operations or increases the costs of deployment for a given user, it diminishes that user's profitability. Only if the sharing arrangement increases the value to another user by a greater amount will the total value of the uses of a band of spectrum increase.

Based on these principles of spectrum value, the next section will discuss how spectrum sharing will likely impact the value of spectrum to a single user and the cumulative value to all users. If the total value of the spectrum for all shared uses is less than the value for a single user, then spectrum sharing diminishes the potential value of the spectrum; likewise, when the total shared value is higher than the single user value, sharing increases the spectrum value. We will explore mechanisms through which various types of spectrum sharing might impact the value to a single user, and the resulting cumulative value of the band to all users, and will use illustrative examples to demonstrate this impact.
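
The comparison at the heart of this argument is simple arithmetic; a minimal sketch (with made-up, illustrative values rather than estimates from the paper) is:

def band_value_delta(single_user_value, shared_values):
    """Compare exclusive-use value with the cumulative value of all shared uses."""
    return sum(shared_values) - single_user_value   # > 0 means sharing raises band value

# Hypothetical, illustrative numbers (in $/MHz-pop); not estimates from the paper.
exclusive = 1.00
shared = [0.55, 0.30]   # e.g., commercial small cells sharing with a federal incumbent
delta = band_value_delta(exclusive, shared)
print(f"sharing {'adds' if delta > 0 else 'destroys'} {abs(delta):.2f} of value per MHz-pop")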

Likely examples include the geographic sharing proposed by NTIA for the 1675-1710 MHz band, the FCC's proposed small cell sharing in the 3.5 GHz band, and the sharing between TV broadcasters and unlicensed users in the TV white spaces. While geographic sharing of the 1675-1710 MHz band could limit the costs of federal reallocation, it would also diminish the economic value of the spectrum by eliminating revenues in the exclusion zones, reducing any nationwide premium on the spectrum, and potentially increasing the costs of deploying wireless broadband services. To evaluate the relative value of sharing proposals in the 3.5 GHz and TV broadcast bands, we must compare the total potential value from all shared users to the value that a single commercial user could derive from this spectrum.

Moderators
Speakers

Coleman Bazelon

Principal, The Brattle Group

Giulia McHenry

Associate, The Brattle Group


Saturday September 28, 2013 5:20pm - 5:50pm
Founders Hall 111

7:00pm

On Free Speech in an Electronic Age: Evaluating Ithiel de Sola Pool's 'Technologies of Freedom' after 30 Years
This year marks the 30th anniversary of the publication of Ithiel de Sola Pool's final and seminal book, "Technologies of Freedom." Its opening words, as current today as then, are "Civil liberty functions today in a changing technological context."

Pool was forward thinking in his capacity for understanding how the forces and trends in information technologies evident in the early 1980s might play themselves out over the decades. To commemorate Pool's work and to introduce it to those who may have missed it, we will dedicate this session to an informal discussion with outstanding scholars who worked with Pool or who have been influenced by his work. They will pursue Pool's admonition that "the norms that govern information and communication will be even more crucial than in the past."

Moderators

Ben Compaine

Columbia Institute for Tele-Information

Speakers

Saturday September 28, 2013 7:00pm - 8:00pm
Multi purpose Room
 
Sunday, September 29
 

9:00am

Who Owns the World's Media?
Download Paper

This paper summarizes the status of a multi-year, 30-country study of global media market structure. Going beyond previously presented results, it analyzes the drivers of the observed trends and projects their implications for future policy. It also compares US media concentration with that of other developed countries and examines how market size and market concentration are related.

Moderators
Speakers

Eli Noam

Columbia University


Sunday September 29, 2013 9:00am - 9:35am
GMUSL Room 221

9:00am

Understanding the Impact of Policy, Regulation and Governance on Mobile Broadband Diffusion in the Developing World
Download Paper

The 2012 edition of the World Bank's Information and Communications for Development report urges countries to increase mobile broadband diffusion in order to improve the state of agriculture, health, financial services, employment and government in the developing world (World Bank, 2012). The main goal of our research is to identify public policy alternatives, regulatory measures, and governing practices that increase the adoption of mobile broadband in developing countries, which have largely adopted a "mobile first" strategy for on-line services. In achieving this goal, we show that different models are necessary to understand mobile broadband diffusion in the developed and developing worlds.

Past cross-national research that examined broadband diffusion has been limited mainly to OECD countries (e.g., Lee, Marcu & Lee, 2011). Yet, findings from these studies have been generalized to apply to developed and developing nations alike. This study examines the impact of policy, regulation and governance on mobile broadband diffusion in 131 countries. In the OECD countries, there is greater diffusion of mobile broadband in countries that make a higher financial investment in information and communication technologies (ICTs) and that practice more effective governance. The degree of competitiveness of the telecommunications sector and more specific measures of governance (i.e., the Telecommunications Regulatory Governance Index proposed by Waverman & Koutroumpis (2011)) do not affect mobile broadband diffusion. In developing countries, national wealth matters a great deal, but greater competition in the telecommunications sector, higher financial investment in ICTs, and telecommunications regulatory governance also significantly increase mobile broadband diffusion.

This research also extends previous findings that connect bridging information and communication divides with political structure (Norris, 2001) and institutional governance (Levy & Spiller, 1996). While we confirm that these factors do not affect mobile diffusion in developed countries, they are important factors in developing countries. The broader regulatory environment and how government is practiced (i.e., governance) are important variables, however, for understanding outcomes such as widespread access to new ICTs in developed countries.

In sum, we found that public policy initiatives, democratic institutions and effective regulatory governance matter, and can mitigate, to some extent, the advantages enjoyed by most developed countries. While we present strong evidence that national policy, regulation and governance are all important, we also show that resources in terms of wealth (in developing countries) and education (in developed countries) matter more for leveraging the benefits of mobile technologies for digital inclusion. Thus, we also show the limits that policy initiatives and related factors have in bridging the mobile broadband divide, at least in the short-term. Finally, it is important for policymakers to be aware and to understand that the path to widespread adoption of mobile broadband requires different strategies depending on a nation's level of economic, political and institutional development.

Note: This empirical research is based on multiple regression analysis of several models that combine and separate OECD and non-OECD countries. The variables associated with each country are derived from publicly available data reported by the United Nations, World Bank, ITU, OECD, CIA World Fact Book, and also Waverman & Koutroumpis (2011, Telecommunications Policy) and Pemstein, Meserve & Melton (2010, Political Analysis). Some of these variables are used as-is whereas others are aggregated into indices that we have developed, for example, to measure telecommunications competition and financial investment in ICTs. This is original research that has not been presented at or published in an alternative outlet.
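
As a toy illustration of the kind of index aggregation mentioned above (this is not the authors' procedure; the indicators, country values, and the z-score averaging rule are all assumptions made for illustration):

import numpy as np

# Hypothetical country indicators (values invented) to be aggregated into a composite
# index, in the spirit of the abstract's ICT-investment and competition indices.
indicators = {
    "ict_spend_pct_gdp":    np.array([1.2, 0.8, 2.1, 1.5]),
    "mobile_capex_per_cap": np.array([30.0, 12.0, 55.0, 25.0]),
}

def composite_index(columns):
    """Z-score each indicator, then average across indicators (one simple aggregation rule)."""
    z = [(v - v.mean()) / v.std() for v in columns.values()]
    return np.mean(z, axis=0)

countries = ["A", "B", "C", "D"]
index = composite_index(indicators)
for c, x in sorted(zip(countries, index), key=lambda t: -t[1]):
    print(f"country {c}: ICT investment index = {x:+.2f}")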

Speakers

Jeff Gulati

Bentley University
My research interests are in legislative studies and representation, campaign & elections, and telecommunications policy. I also teach classes on American politics, Congress, lobbying & government relations, and research methods. See publications on my website. I have written on all the topics above and also on media coverage of human trafficking and lots of articles on social media and congressional elections. I currently am focusing on the...


Sunday September 29, 2013 9:00am - 9:35am
GMUSL Room 225

9:00am

Net Neutrality, Network Investment and Content Quality
Download Paper

The increasing development of the Internet has given rise to a huge regulatory debate over the past few years. The most important issue is certainly the neutrality of the Internet and how it impacts the incentives of network operators and content providers to invest, both in network infrastructure and in quality of services. The debate over net neutrality raises many questions about how relationships between network operators and content providers should be organized, mainly in terms of pricing and quality of access to broadband transmission services (Schuett (2010) gives a recent overview of these issues). One of the main questions is the conditions under which regulators should allow network operators to adopt traffic management practices to avoid congestion and ensure sufficient quality of service for content providers to offer their services. Recently, in September 2011, the FCC released its final net neutrality rules for preserving an open Internet and stressed the need for transparency in network management practices and reasonable discrimination in transmitting network traffic. In Europe, the Commission is due to give its recommendations to ensure an open Internet in late 2012.

In Europe, related to the debate on net neutrality, the regulatory debate focuses on investment in Next Generation Networks (NGNs). The question is how to give network operators incentives to invest in new communication infrastructures and then to migrate from the copper network (the old technology) to the fiber network (the new technology). The economic literature on this topic has mainly analyzed the impact of access pricing rules on operators' incentives to roll out new infrastructures. How to manage the coexistence of the old and the new technologies is certainly the main issue for National Regulatory Authorities (NRAs). The interplay between investment and access price has already been studied in the economics literature, for instance by Brito, Pereira and Vareda (2010), Vareda & Hoernig (2010) and more recently by Bourreau, Cambini and Dogan (2011) and Bourreau, Cambini and Hoernig (2012) in models that directly introduce the issue of technological migration and the balancing effects of the network access price on the incentives to invest in the new technology.

Contributions closest to ours are those that have modeled the key impact of network neutrality on the investment of network operators. The first rigorous theoretical analysis of net neutrality can be found in Economides and Tag (2007). Using a two-sided framework, this paper analyzes a model where network operators can charge content providers for traffic termination to consumers. They show that net neutrality, viewed as a no-access-fees regime, can greatly improve consumer surplus, but they do not consider investment by network operators or innovation by content providers.

Economides and Hermalin (2012) assume a limited bandwidth allocated between content providers and look at the ISP's incentive to invest in more bandwidth. Cheng et al. (2011), Choi and Kim (2010), and Kramer and Wiewiorra (2012) use queuing theory to model congestion on the Internet. They show that priority pricing can be welfare enhancing in the short run and can increase the ISP's incentive to invest in network infrastructure. Recently, Reggiani and Valletti (2012) explicitly introduce a dynamic model with a large content provider competing with providers on the fringe. Bourreau et al. (2011) model competition between ISPs and study investment decisions in network capacity. While all these contributions give interesting insights into the impact of net neutrality on the behavior of network operators and content providers, none of these papers explicitly analyzes the interplay between the investment decision of the network operators and the content quality offered by content providers. Yet there exists a strong relationship between the two that should be taken into account when analyzing the potential impacts of net neutrality on infrastructure investment by network operators.

The aim of this paper is to contribute to the question of investment in NGNs by focusing on the interplay between infrastructure investment by the network operators and the quality of content, and to examine the potential impact of the discriminatory regime. Usually, Internet broadband is viewed as a two-sided market consisting of consumers on one side and content providers on the other. The interplay between both sides includes the way prices are set. However, consumers' willingness to pay to access the network depends not only on the number of content providers, but also on the quality of the content offered. On the other hand, network operators' incentives to invest crucially depend on how they can price access to the new technology on both sides of the platform. As consumer willingness to pay is usually an increasing function of the quality of content, the network operator's incentive to invest should potentially be even stronger when the quality of content is high. That is certainly part of the mechanism that can increase the incentive to invest in a new technology for the network operator. The reverse is certainly true as well, and the incentive to upgrade the quality of content for the content providers should also increase with the quality of the network infrastructure.

The setting of the model is the following. We assume that a network operator provides access to consumers and content providers. The network operator offers two access technologies, an old technology (copper) and a new technology (NGN). Content providers sell both a basic content and a premium content, depending on the network technology consumers subscribe to. We consider two market segments: one in which the network operator only offers the old technology (copper), and the other in which both technologies are offered. The network operator can invest to increase its market coverage with the new technology. We show that a marginal network investment can be beneficial for content providers and increase consumer surplus, and we examine the impacts of the discriminatory regime. We also show that content quality produces contrasting effects on investment by the network operator, depending on how highly consumers value premium content relative to basic content and on the substitutability between the two technologies. Finally, high content quality can give the network operator incentives to invest more in the new technology, and thus create a greater positive effect of the discriminatory regime.

Moderators
Speakers

Sunday September 29, 2013 9:00am - 9:35am
GMUSL Room 120

9:00am

Evaluating Data Breach Notification Laws - What Do the Numbers Tell Us?
Download Paper

Security and data privacy threats are rapidly emerging as one of the critical legal and economic issues for regulators. One area of regulatory attention has been the introduction of mandatory disclosure policies after a security breach in certain economic sectors. Most recently this global trend has also gained momentum in the new policies of the European Union.

This paper aims to set the basis for a comprehensive investigation of information disclosure as a policy strategy for data protection. The main objective is twofold: first, the paper develops a conceptual model to study the effectiveness of data breach notification laws (DBNL), which will support the feasibility of tailored analysis. The model captures the main causal relations around DBNLs and the actors associated with them (government, companies, community, media). A proper evaluation of the effectiveness of a DBNL will be made possible not only by analyzing the number of notified security breaches over time, but more precisely by enabling the assessment of effects directly related to the behavior of single actors and their interdependencies with the system they belong to. These include economic, legal, crime and response effects(1). The model will indicate concrete and measurable proxies for causal relation measurements.

The second objective is to study empirically the relationship between specific DBNL characteristics and the number of reported data breaches. The conceptual model will be used as a reference in order to empirically analyze the effects of the 46 DBNLs implemented in the US, based on the different state characteristics in terms of DBNLs, actors, and data breach events. The analysis will be relevant not only for the American context but also for other regions, above all for the EU, given the growing attention of the European Commission to data security and transparency in cases of data breaches. Through this examination the paper will test the hypothesis that implementation of a DBNL has a high impact in the short term on decreasing the number of notified breaches and the related effects, but that in the mid/long term the changing context and actor behavior cause this impact to lose its significance, in the absence of any countermeasure such as ad hoc law amendments.

The analysis will be performed starting from descriptive statistics supported by the availability of a database including more than 3,500 data breaches in the USA made public since 2005. The breaches and organization types are classified into different categories, enabling analysis not only at the state level but also at the sectoral level (e.g., healthcare) and by breach category (e.g., hacking). The model incorporates as a control variable the GDP produced by each sector in the different states. The data necessary to perform the analysis are: DBNL characteristics (e.g., definitions, notification timing, penalties) drawn from the legislation in each state; notified data breaches by state, collecting single breach information from relevant databases (2); and state economic and technological properties via reports of the US Economic Census Bureau and the Bureau of Economic Analysis. Additional information on effects such as identity theft is available from the FTC and other publications.
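
A minimal sketch of the kind of GDP-normalized before/after comparison described above, using toy records rather than the actual breach database (state names, effective dates, and counts are invented for illustration), might look like:

import pandas as pd

# Toy records in the spirit of the described database (all values hypothetical)
breaches = pd.DataFrame({
    "state":  ["CA", "CA", "NY", "NY", "TX", "TX"],
    "year":   [2006, 2010, 2006, 2010, 2006, 2010],
    "sector": ["healthcare", "finance", "healthcare", "retail", "finance", "healthcare"],
    "count":  [40, 25, 30, 35, 20, 28],
})
law = pd.DataFrame({
    "state": ["CA", "NY", "TX"],
    "dbnl_effective_year": [2003, 2005, 2009],   # assumed effective dates for illustration
    "gdp_bln": [1800, 1100, 1000],               # control variable, as in the model
})

df = breaches.merge(law, on="state")
df["post_law"] = df["year"] >= df["dbnl_effective_year"]
df["breaches_per_gdp"] = df["count"] / df["gdp_bln"]

# Descriptive comparison: notified breaches (GDP-normalized) before vs. after the DBNL
print(df.groupby("post_law")["breaches_per_gdp"].mean())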

The research is innovative in proposing a comprehensive model and in linking law characteristics, economic and sectoral state properties and data breaches. To my knowledge this is the first paper to perform such an empirical analysis. Earlier work has focused on specific relationships in the model, such as estimation of market impact of breach announcements, mandatory disclosure effects considering the correlation of data breaches and identity theft, and empirical analysis of data breach litigation at the state level.

Moderators
Speakers

Fabio Bisogni

TU Delft / FORMIT Foundation


Sunday September 29, 2013 9:00am - 9:35am
GMUSL Room 332

9:00am

A Quantitative Analysis on the Impact of Spectrum Access Policy Flexibility and DSA System Capability on Spectrum Sharing Potential
Download Paper

While Dynamic Spectrum Access (DSA) is premised upon the existence of technologies and policies that enable flexible access to spectrum, a quantitative understanding of the coupling among DSA technologies, policy definition, and spectrum sharing potential remains largely unexplored. Over ten years of technology research, development, and prototype testing has resulted in algorithms and system concept advances, including the ability to deduce policy constraints (and permissions) in situ. Knowing the policy, however, is different than selecting operating parameters (e.g., transmission power) that lead to policy compliance (e.g., interference mitigation). The primary hurdle to in situ compliance is situational awareness uncertainty, which results from the inherently stochastic nature of wireless communications and spectrum user activity.

Situational uncertainty is currently mitigated by policy specifications that impose universal operating constraints and limit efficient spectrum sharing. For example, TV Band Device policies were derived in a manner similar to policies for non-DSA systems. Specifically, numerous models of hypothetical scenarios, including plausible 'worst-case' conditions, were developed and analyzed. Judgments were made as to the mechanisms (e.g., sensing, GPS-aided geolocation) to be used for situational awareness as well as their corresponding capabilities (e.g., detector sensitivity, location accuracy). Those factors were then specified in regulatory policy as universal constraints on device design, awareness mechanisms, and performance limitations. The result is limited spectrum sharing ability in the majority of operating conditions in an effort to ensure risk mitigation of significant impacts associated with rare situations.

This paper asserts and tests the hypothesis that improved spectrum sharing is possible by incorporating in situ uncertainty evaluations in DSA situational awareness and developing flexible policy specifications that regulate DSA behavior as a function of awareness. The assertions are based upon preliminary results of ongoing research into the relationship between DSA situational awareness uncertainty and potential DSA performance as measured by capacity gain, interference mitigation, monetary cost, and other metrics. A DSA situational awareness and decision model is developed using probabilistic Structural Causal Modeling (SCM) and multiattribute decision theory. The probabilistic SCM provides a formal system for representing cause and effect of DSA system operations and is built upon well-established engineering models for wireless communications. The multiattribute decision model provides a formal basis for in situ DSA decision-making that combines user goals and situational uncertainty. The model can also be used by engineers and policy-makers for evaluating the impact of policies, DSA awareness mechanisms, and user preferences on spectrum sharing and DSA system performance.

The paper provides an overview of the model but focuses on quantitative evaluations of spectrum sharing and DSA system performance as a function of policy flexibility and DSA system capabilities. Characteristics of the various model components and their significance are presented through a discussion of model development. The impact of spectrum policy on spectrum sharing and DSA system performance is quantified in the use cases in terms of capacity and interference.
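
As a toy illustration of combining situational-awareness uncertainty with a multiattribute decision rule (this is not the paper's SCM; the propagation proxy, belief distribution, and preference weights are all assumptions), one could compute expected utilities over candidate transmit powers as follows:

import numpy as np

# Candidate transmit powers (dBm) and a coarse belief over the incumbent's distance (km).
powers_dbm = np.array([10.0, 20.0, 30.0])
distances_km = np.array([1.0, 5.0, 10.0])
belief = np.array([0.2, 0.5, 0.3])            # situational-awareness uncertainty (assumed)

def capacity_score(p_dbm):                    # higher power -> more capacity (toy monotone proxy)
    return p_dbm / 30.0

def interference_prob(p_dbm, d_km):           # toy distance-based harm model (illustrative only)
    margin_db = 20.0 * np.log10(d_km) + 30.0 - p_dbm
    return 1.0 / (1.0 + np.exp(margin_db / 3.0))

w_capacity, w_interference = 0.6, 0.4         # user/policy preference weights (assumed)

def expected_utility(p_dbm):
    harm = np.sum(belief * interference_prob(p_dbm, distances_km))
    return w_capacity * capacity_score(p_dbm) - w_interference * harm

best = max(powers_dbm, key=expected_utility)
print({p: round(expected_utility(p), 3) for p in powers_dbm}, "->", best, "dBm")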

Preliminary results collected to date indicate a clear gain in spectrum sharing as a function of increased policy flexibility and increased DSA system situational awareness. Policies that define specific DSA mechanisms or design constraints provide the least amount of policy flexibility and spectrum sharing potential; they similarly limit the design options available to system developers. Policies that define the desired effect (e.g., capacity goal) or impact limitations (e.g., maximum interference temperature) afford the greatest flexibility to a DSA system and thus the greatest potential for spectrum sharing. A DSA system's ability to use the spectrum sharing potential is then a function of the specific situation and the DSA system's degree of situational awareness uncertainty. The final paper will detail these findings as well as findings from ongoing research in the coming months.

Moderators

John Chapin

Program Manager, DARPA

Speakers

Todd Martin

George Mason University


Sunday September 29, 2013 9:00am - 9:35am
GMUSL Room 329

9:35am

Globalization, Indigenous Innovation and National Strategy: Comparing China and India's Wireless Standardization
Download Paper

This comparative case study investigates the contrasting approaches adopted to 3G standard setting by two leading developing countries, China and India. Despite facing comparable challenges in terms of meeting consumer demand and fostering industrial development, the two countries adopted sharply different standards strategies, with China investing heavily in a homegrown TD-SCDMA standard, and India allowing 3G licensees to provide service utilizing any type of international standard. In this paper, we trace the antecedents of this divergence to the different policy approaches the two countries adopted to their respective electronics industries in the early to mid-1990s. We demonstrate that the standard-setting policies of the two countries are embedded in their trade strategies, innovation policies and manufacturing sector policies.

Though standards play a critical role in the development of telecommunications markets, there is no consensus in the theoretical or empirical literature about the most advantageous standard-setting strategies, especially in developing markets. While emerging economies have utilized indigenous standards to catch up with their more technologically advanced competitors and build their national innovation systems, national standards also run the risk of lock-in to an inferior standard and of discouraging technological innovation in the long run. Market-based standard-setting, on the other hand, may initially slow the growth of markets, deprive manufacturers of potential scale economies, and also disadvantage domestic firms in competition with better-established international players. The contrasting approaches to standard-setting in China and India, both striving to become the next technological powerhouse, provide an ideal case to compare different models of standard-setting and technological innovation.

However, we argue in this paper that the merits and demerits of competing standard-setting and innovation models are by themselves insufficient to explain the contrasting paths to 3G standard-setting adopted in China and India. Instead, we also focus on the industry structure of the domestic electronics manufacturing industries in the two countries. This industry structure is the result of the liberalization process initiated in China in the 1980s, and in India roughly a decade later. These policies and subsequent growth resulted in an electronics manufacturing sector in China many times larger than in India. The Chinese sector is also more concentrated, marked by a few large manufacturing firms closely affiliated with the state through ownership, management and industrial policy. In India, in contrast, late liberalization resulted in a much smaller electronics manufacturing sector composed of generally smaller firms. Also, Indian electronics firms tend to specialize in one or two electronics manufacturing sectors, unlike Chinese firms with their multi-sector expertise.

When mobile standardization became a policy priority in the two countries with market expansion, the choice menu in terms of standard-setting strategy was constrained by, among other factors, the prevailing structure of the electronics industry. In China, we argue that the close affiliation between the dominant mobile providers and the state, as well as the presence of large manufacturing firms, made a national standard both feasible and attractive. In contrast, India's manufacturing sector is at least a decade behind China's, and its contract manufacturing experience is more limited than China's. The larger number of players made standards consensus more difficult to achieve, and the limited market share provided less scope for scale economies. India decided to allow its 3G licensees to provide service utilizing any international standard. Though theory suggests that open standards competition will give greater incentive to innovation, China appears to be winning the patents race as well, because the larger size of Chinese manufacturing firms provides greater resources for R&D investments.

In conclusion, we summarize how each country's 3G standard-setting strategy is derivative of and constrained by its past trade strategy, innovation policies, and manufacturing sector policies. Though China appears to have secured the advantage at the present time, it is possible that standards competition in India might provide greater scope for innovation going forward, provided India is able to devote greater resources to R&D. We also speculate on the relative merits of each country's strategy to promote innovation in a fast-growing telecommunications industry.

Moderators
Speakers

Krishna Jayakar

Penn State University


Sunday September 29, 2013 9:35am - 10:10am
GMUSL Room 221

9:35am

In the Eye of the Beholder: The Role of Needs-Based Assessment in IP Address Market Transfers
Download Paper

Objective: Prior work by the authors empirically describes the secondary market for IPv4 address resources (Mueller, Kuerbis and Asghari 2012). Initial results indicated that the Regional Internet Registries' (RIRs') needs assessment policies introduce significant friction into the IPv4 secondary market and may be inconsistent with organizations' demand and planning time horizons. The objective of this study is to extend that prior work by finding out whether number blocks are being traded in an 'underground' manner in order to avoid the RIRs' needs assessments, and whether the numbers sold in the secondary market that do go through needs assessments are immediately put to use, as the needs assessment is intended to enforce. This will provide greater insight into the role of administrative needs assessment policies in the IP address transfer market.



Hypotheses: We hypothesize that address blocks acquired in the secondary market after needs assessments are not necessarily more likely to be utilized (i.e., publicly routed). Such a finding would suggest that needs assessments retard the secondary market without bringing any compensating benefits, such as reducing hoarding. Current policy debates in the RIRs are considering extending the time horizon of, or eliminating entirely, needs-based assessment for transferred resources. This is similar to what occurred in the spectrum secondary market in the US, with the FCC putting aside cumbersome administrative evaluation processes in favor of expediting transfers.



We also hypothesize that some address blocks may be transferred under the table in order to avoid going through the RIRs' needs assessment processes. Number blocks could be traded as 'leases' without a formal transfer that alters the RIRs' records. Our method (described below) can also detect changes in routing that are not reflected in changes in the RIRs' Whois records. If a significant amount of hidden transfers is taking place, it suggests that needs assessment policies not only fail to serve a positive function, but create harm by providing incentives to bypass accurate registration records.



Method: By observing changes in the Internet's routing topology archives and relating them to transfer data, the authors will create and analyze a new dataset describing the extent to which transferred address resources are being utilized on the Internet. Using a script, we loop over the legacy IP address space, checking the Autonomous System Number associated with tradable prefix blocks in every month of the period of study. The underlying routing data for these lookups come from the Route Views archives. The method employed is new and the subject matter (the IPv4 secondary market) is relatively unexplored.
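
A minimal sketch of such a script, using hypothetical monthly origin-ASN snapshots in place of parsed Route Views RIB dumps and an assumed transfer date (all prefixes, ASNs, and dates below are made up), might look like:

from collections import defaultdict

# Hypothetical monthly snapshots: prefix -> origin ASN seen in Route-Views-style RIB dumps.
# (Real data would be parsed from the Route Views archives; values here are invented.)
snapshots = {
    "2012-01": {"192.0.2.0/24": 64500, "198.51.100.0/24": None},   # None = not routed
    "2012-06": {"192.0.2.0/24": 64501, "198.51.100.0/24": None},
    "2012-12": {"192.0.2.0/24": 64501, "198.51.100.0/24": 64502},
}
transfers = {"192.0.2.0/24": "2012-03"}   # prefix -> month of recorded RIR transfer (assumed)

history = defaultdict(list)
for month in sorted(snapshots):
    for prefix, asn in snapshots[month].items():
        history[prefix].append((month, asn))

for prefix, rows in history.items():
    origin_changes = sum(1 for (_, a), (_, b) in zip(rows, rows[1:]) if a != b)
    routed_after_transfer = any(
        asn is not None and prefix in transfers and month >= transfers[prefix]
        for month, asn in rows
    )
    print(prefix, "origin changes:", origin_changes,
          "| routed after recorded transfer:", routed_after_transfer)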

Speakers

Sunday September 29, 2013 9:35am - 10:10am
GMUSL Room 225

9:35am

Deep Packet Inspection: Effects of Regulation on its Deployment by Internet Providers
Download Paper

Deep Packet Inspection (DPI) has been the subject of heated policy debates. This paper examines theoretically and empirically patterns of DPI adoption during the past five years. A range of uses can drive ISPs to deploy DPI in their networks. What is less understood is the extent to which government policies encourage or discourage DPI adoption by ISPs. We explore those forces and find evidence that regulatory frameworks exert considerable influence on the adoption of DPI.



Method: Our conceptual framework looks at DPI as a technology with potential costs and benefits for an ISP. The decision to deploy DPI is therefore modelled as a technology investment decision within national legal and institutional conditions. Our empirical analysis is based on a crowd-sourced online test named Glasnost, hosted by the Measurement Lab. Users run Glasnost to detect whether their ISP uses DPI to throttle BitTorrent traffic. We built a dataset based on around 250,000 tests completed within a five-year period (2008-2012). For each test, we looked at the IP address of the user, identifying their country and the ISP that connects them to the Internet. We then calculated a score for the extent to which DPI was used at the level of each ISP and in each country. This method resulted in DPI scores for 75 countries and 288 broadband ISPs, allowing us to describe patterns of adoption. To test conjectures derived from our conceptual analyses, we added variables reflecting the market and governmental forces relevant to the countries and ISPs. These were drawn from a variety of sources, including the ITU, the World Bank, TeleGeography, Privacy International, and the OpenNet Initiative. Because not all information was available for all 75 countries, the result is an unbalanced panel.
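As an illustration of the scoring step, the sketch below assumes a DPI score is simply the share of Glasnost tests indicating BitTorrent throttling, aggregated per ISP and per country; the paper's exact aggregation rule may differ, and the records shown are hypothetical.

```python
# Illustrative sketch (assumption: a DPI "score" is the share of Glasnost tests
# for an ISP/country that indicate BitTorrent throttling; the paper's exact
# aggregation may differ). Each test record carries country, ISP and a verdict.
from collections import defaultdict

def dpi_scores(tests):
    """tests: iterable of dicts with keys 'country', 'isp', 'throttled' (bool)."""
    by_isp = defaultdict(lambda: [0, 0])       # (country, isp) -> [throttled, total]
    by_country = defaultdict(lambda: [0, 0])   # country -> [throttled, total]
    for t in tests:
        for counter in (by_isp[(t["country"], t["isp"])], by_country[t["country"]]):
            counter[0] += int(t["throttled"])
            counter[1] += 1
    isp_score = {k: v[0] / v[1] for k, v in by_isp.items()}
    country_score = {k: v[0] / v[1] for k, v in by_country.items()}
    return isp_score, country_score

# Hypothetical example records
sample = [{"country": "NL", "isp": "ISP-A", "throttled": True},
          {"country": "NL", "isp": "ISP-A", "throttled": False},
          {"country": "NL", "isp": "ISP-B", "throttled": False}]
print(dpi_scores(sample))
```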



Preliminary Findings: An examination of the data revealed that in 2011, around half of all broadband operators worldwide made noticeable or pervasive use of DPI for bandwidth management purposes. This figure is high in light of the public and regulatory unease over the use of these technologies. It suggests that from the ISPs' perspective, the costs of DPI are often outweighed by its benefits, most notably cost savings on bandwidth and the postponement of infrastructure expansion. Using an integrated modelling approach, we hypothesized that, all other things being equal, stringent privacy regulation would discourage the use of DPI, whereas strong social censorship policies would encourage it. There are also countries where neither is present; in those cases, we hypothesized that the ISPs' own incentives dominate adoption. The countries for which we had the relevant data were grouped across two axes: privacy protection (PP) and online social censorship (SC). This resulted in three groupings: countries with a hands-off approach to the telecom market (low PP, low SC), a group with a push towards privacy (high PP, low SC), and those with a push towards censorship (low PP, high SC). We then looked at the relationship between groups and country-level DPI scores, where higher scores indicate more use of DPI.



Our finding is that ISPs in the hands-off group score 0.19 on average, the privacy-push group 0.11, and the censorship-push group 0.36. These differences are statistically significant and in line with our theoretical expectations. Another interesting angle is opened by looking at the ISP-level scores. The country-level score is weighted based on ISP size; if we use the unweighted scores, the group averages change to 0.17, 0.15 and 0.32 respectively. The differences are in the same direction, but become less pronounced. This suggests that the regulatory frameworks of privacy and censorship have a stronger impact on the decisions of the larger ISPs. Presumably, they are under greater scrutiny. We conclude the paper with a discussion of policy implications.
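The weighted-versus-unweighted comparison can be illustrated with a toy calculation; the grouping labels follow the abstract, while the scores and subscriber counts below are placeholders.

```python
# Illustrative sketch of the weighted vs. unweighted comparison (hypothetical
# numbers; the grouping into hands-off / privacy-push / censorship-push and the
# use of ISP size as weight follow the description in the abstract).
def group_means(isps):
    """isps: list of dicts with keys 'group', 'dpi_score', 'subscribers'."""
    groups = {}
    for isp in isps:
        g = groups.setdefault(isp["group"], {"w_sum": 0.0, "w": 0.0, "scores": []})
        g["w_sum"] += isp["dpi_score"] * isp["subscribers"]
        g["w"] += isp["subscribers"]
        g["scores"].append(isp["dpi_score"])
    return {name: {"weighted": g["w_sum"] / g["w"],
                   "unweighted": sum(g["scores"]) / len(g["scores"])}
            for name, g in groups.items()}

print(group_means([
    {"group": "hands-off", "dpi_score": 0.30, "subscribers": 5_000_000},
    {"group": "hands-off", "dpi_score": 0.05, "subscribers": 500_000},
]))
```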

Moderators
Speakers
avatar for Johannes M. Bauer

Johannes M. Bauer

Professor and Chairperson, Michigan State University
I am a researcher, writer and teacher interested in the digital economy, its governance as a complex adaptive system, and the effects of the wide diffusion of mediated communications on society. Much of my work is international and comparative in scope. Therefore, I have great interest in policies adopted elsewhere and the experience with different models of governance. However, I am most passionate about discussing the human condition more... Read More →
avatar for Milton Mueller

Milton Mueller

Professor, Georgia Institute of Technology
(TBC) Milton Mueller is Professor at the School of Public Policy, Georgia Institute of Technology, USA. Mueller received the Ph.D. from the University of Pennsylvania’s Annenberg School in 1989. His research focuses on rights, institutions and global governance in communication and information industries. He is the author of two seminal books on Internet governance, Ruling the Root and Networks and States. Mueller was one of the founders of... Read More →


Sunday September 29, 2013 9:35am - 10:10am
GMUSL Room 120

9:35am

Ex Ante vs. Ex Post: Economically Efficient Sanctioning Regimes for Online Risks
Download Paper

When and how should we sanction network (service) providers, software vendors, or application service providers to mitigate the harm of security and privacy risks? Here we apply an economic framework that compares two sanctioning regimes: ex ante and ex post. Specifically, we introduce, translate, and apply the model by Garoupa et al. to security and privacy risks online.



We identify the conditions under which the different sanctions are economically efficient. We argue that for well-known security risks, such as botnets, the economically efficient solution would be ex ante sanctions. By contrast, privacy risks, which are contextual, poorly understood, new, and whose distribution across demographics is difficult to estimate, should be managed through ex post sanctions. To the extent that providers are judgment-proof, the sanctions can be non-monetary, e.g. reputation-based. Further, resource allocation is suboptimal when privacy risks are treated disjointly from security risks; thus, the relative merits of either security or privacy investments should take into account the opportunity cost of mitigating the other. Finally, we provide an analysis of existing policy measures through the case studies of Do Not Track and botnet takedowns.



We address two kinds of regulatory regimes for sanctions: 1) ex ante and 2) ex post. Ex ante, or action-based, sanctions prohibit specific actions. For example, for environmental risks it may be illegal to store industrial waste in a container above a certain volume, below a specific tensile strength, etc. In automobile safety, ex ante regulation manifests as speed limits, where it is considered too dangerous for individuals to drive above a certain limit. Thus, ex ante sanctions are action dependent. Online, these sanctions are part of policy initiatives like Do Not Track. (Arguably, there is no direct financial sanction for those who do not comply. However, indirect sanctions through reputation loss are equally relevant.)



Ex post, or harm-based, regulation sanctions after the fact. For example, instead of mandating a specific kind of container to store industrial waste, the government might decide that the respective industries can make better decisions. However, if a specific company is lax and there is a spill, that company would be required to pay damages. Thus, ex post sanctions are harm dependent. If a potentially hazardous activity does not have any negative consequences, there are no sanctions. Online, these sanctions manifest as, for example, FTC enforcement against Google for privacy breaches due to Buzz.
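The intuition behind the comparison can be illustrated with a stylized deterrence calculation. This is a simplified, textbook-style sketch, not the Garoupa et al. model itself; all numbers and probabilities below are hypothetical.

```python
# A stylized toy comparison (assumption: simplified deterrence logic for
# illustration only, not the Garoupa et al. model). An actor avoids a risky
# practice if its private cost of care is below the expected sanction; ex ante
# sanctions hit the prohibited act with certainty, while ex post sanctions hit
# realized harm only with some probability of harm and detection.
def deterred_ex_ante(cost_of_care: float, sanction: float) -> bool:
    # The prohibited action is observable, so the sanction is certain.
    return cost_of_care < sanction

def deterred_ex_post(cost_of_care: float, sanction: float,
                     p_harm: float, p_detect: float) -> bool:
    # Sanction applies only if harm materializes and is attributed to the actor.
    return cost_of_care < p_harm * p_detect * sanction

# Example: a well-understood, easily attributed risk vs. a poorly understood,
# hard-to-attribute harm (hypothetical values).
print(deterred_ex_ante(10, 50))                             # True: certain sanction deters
print(deterred_ex_post(10, 50, p_harm=0.5, p_detect=0.2))   # False: 0.5 * 0.2 * 50 = 5 < 10
```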



Currently both sanctioning regimes are being used to develop public policy responses to security and privacy risks online. These sanctions are being enforced by agencies such as the Federal Bureau of Investigation (FBI). Often, however, agencies such as the Federal Trade Commission (FTC), traditionally enforcement agencies, are also being tasked with informing and creating policy. The actions of both agencies, in enforcement as well as policy, have been controversial. FBI takedowns of botnets such as Nitol have been criticized for their collateral damage. FTC initiatives, such as Do Not Track, have similarly come under attack both from privacy advocates and from those who prefer a free-market approach.



It is unclear under what conditions each of these types of sanctions is economically sensible. Given that both ex ante and ex post sanctions are being used, which are more effective for security and privacy risks online?



We begin to answer this question by using an economic framework that compares the effectiveness of the two distinct regimes of ex ante and ex post sanctions. This research is based on previous work by Garoupa et al. We begin by introducing the general model. We then extend this model by considering an inequitable distribution of risk. We analyze existing policies using this economic framework. We discuss the broader scope of sanctions and the implications for policy. We conclude with specific insights for enforcement agencies.

Moderators
Speakers

Sunday September 29, 2013 9:35am - 10:10am
GMUSL Room 332

9:35am

Spectrum Markets and Sharing Via Spectrum Consumption Models
Download Paper

The continuous development of new technologies and uses for wireless radio spectrum has prompted spectrum management agencies and wireless service providers to consider flexible spectrum assignment mechanisms as a means to respond to the changing spectrum management landscape. The regulatory, technological and economic changes that are now driving the wireless services industry will spawn new technical and business models in wireless service provision, many of which will rely on Dynamic Spectrum Access (DSA) methods to enable efficient use of spectrum resources. However, the use of DSA methods also requires policy-based mechanisms in radio devices to facilitate and control the assignment of spectrum, given the wide range of communication scenarios in which there may be conflicting goals for the use of this resource (e.g. public safety vs. profit-based services).

Spectrum consumption modeling attempts to capture the spectral, spatial, and temporal consumption of spectrum by any specific transmitter, receiver, system, or collection of systems. The information contained in the models enables better spectrum management practices and allows for the identification of spectrum reuse opportunities. The characteristics and structure of spectrum consumption models are being standardized within the newly formed IEEE P1900.5.2 group, in which the authors participate.

This paper presents and discusses our research in establishing a techno-economic framework for the use of spectrum consumption models (SCMs) to enable spectrum sharing markets. We focus on exchange-based spectrum trading markets in which the entities wanting to use spectrum resources (spectrum users) use SCMs to express the characteristics of their desired spectrum use; based on these, the exchange can determine the range of frequencies within the service area that can satisfy the user and the charge the user should pay. Several charge determination methods and criteria are also explained.

Spectrum consumption models allow for fine-grained management of spectrum resources in a spectrum market. We use agent-based modeling methods to illustrate the use of SCMs in spectrum trading market scenarios and the potential resource efficiency and economic gains obtained when compared with more traditional (less granular) resource assignment approaches. Some of the practical limitations of SCM-based spectrum trading are also discussed.
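As a rough illustration of how an exchange might act on SCM-like requests, the toy sketch below matches simplified consumption models (bandwidth, area, duration) to channels and computes a usage charge. The data structures and pricing rule are assumptions for illustration, not the authors' agent-based model.

```python
# A minimal, hypothetical sketch (not the authors' model): spectrum users express
# simplified "consumption models" (bandwidth, area, time), and an exchange assigns
# the cheapest compatible channel and a usage charge.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SCMRequest:
    user: str
    bandwidth_mhz: float
    area_km2: float
    hours: float

@dataclass
class Channel:
    freq_mhz: float
    bandwidth_mhz: float
    price_per_mhz_hour: float
    assigned_to: Optional[str] = None

def assign(request: SCMRequest, channels: List[Channel]):
    """Assign the cheapest free channel wide enough for the request and price it."""
    candidates = [c for c in channels
                  if c.assigned_to is None and c.bandwidth_mhz >= request.bandwidth_mhz]
    if not candidates:
        return None, 0.0
    chosen = min(candidates, key=lambda c: c.price_per_mhz_hour)
    chosen.assigned_to = request.user
    charge = request.bandwidth_mhz * request.hours * chosen.price_per_mhz_hour
    return chosen, charge

# Hypothetical channels and a single request
channels = [Channel(3550.0, 10.0, 0.5), Channel(3560.0, 20.0, 0.4)]
print(assign(SCMRequest("operator-1", 15.0, 30.0, 4.0), channels))
```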

Finally, spectrum resource sharing via SCMs and its integration with policy-based spectrum management is also covered, in order to discuss the use of SCMs in the context of ongoing standardization efforts in the IEEE and the WinForum, and to present a set of possible future spectrum management scenarios in which quality of service, spectrum efficiency, and public vs. private spectrum use (i.e. safety vs. commercial) goals and interests can be met. The impact of regulation is also discussed.

We hope that the results and insights of this paper are of use to regulators and policy makers, and that it provides them with an initial exposure to the potential uses of spectrum consumption models and policy-based spectrum management.

Moderators
JC

John Chapin

Program Manager, DARPA

Speakers
CC

Carlos Caicedo

Associate Professor, Syracuse University, School of Information Studies


Sunday September 29, 2013 9:35am - 10:10am
GMUSL Room 329

10:10am

Characterizing and Comparing the Evolution of the Major Global Players in Information and Communications Technologies
Download Paper

Introduction: In this paper, we characterize and compare, using most recent official macro-economic data, the evolution of the Information and Communications Technologies (ICT) industry of the most important global players in ICT: the USA, Japan, China, the European Union (EU), Korea and Taiwan. The ICT industry includes IT and telecom hardware manufacturers, telecom operators and software and computer service firms. It provides technologies and solutions necessary for the development of the digital economy and society. This analysis is particularly relevant for policy makers since the ICT industry and ICT-enabled innovation make an increasingly important contribution to economic growth. Our research is part of the policy support provided by JRC-IPTS to the European Commission, in particular to its Communications Networks, Content and Technology Directorate General.

Approach and Data: We first identify the most important global players in ICT by comparing the economic weight of their ICT industry and their volume of Business Expenditures in R&D (BERD). We then analyze the specialization and strengths of these economies in ICT manufacturing and services. We conclude by examining main trends and policy implications.

The data used in this research (value added, employment and BERD) was collected from official sources including OECD, US Bureau of Economic Analysis, NSF, METI, EUROSTAT, National Bureau of Statistics of China, Statistics Korea, National Statistics of Taiwan. For patents we use EPO PATSTAT data. Our analysis covers four years, from 2006 to 2009, a period that ended with the recent economic crisis. 2009 is the most recent year for which official data is currently available.

We followed the most recent definition of the ICT sector adopted by the OECD in 2007. In order to obtain comparable data from 2006 to 2009, we elaborated a transition methodology from the previous to the current definition. We developed correspondence tables to account for different ICT sector definitions in the analyzed countries. Data collection and methodological work was conducted jointly with the Valencian Institute of Economic Research (IVIE). Our data sets and methodological reports and notes are publicly available on the JRC-IPTS web site.

The novelty of this research consists in providing a multi-year comparison of the ICT industry and its R&D by manufacturing and services sectors across six large world economies, relying on official data.

Results overview: Our initial results indicate that the USA, Japan, China, the EU, Korea and Taiwan are the six world economies that contribute most to global ICT production and R&D. Among those, Taiwan, Korea and Japan are the most "ICT-specialized" economies. The ICT industry of Taiwan and Korea is more specialized in manufacturing, while it is strongly oriented toward services in the EU, the USA and Japan.
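For illustration, ICT specialization can be computed as a location-quotient-style index: the ICT share of a country's value added relative to the ICT share across all compared economies. This formula and the figures below are assumptions for exposition; the paper's exact measure may differ.

```python
# Illustrative sketch (assumption: "ICT specialization" measured as a
# location-quotient-style index, i.e. the ICT share of a country's value added
# relative to the ICT share across all compared economies; the paper may use a
# different formula). Figures are placeholders, not real data.
def ict_specialization(economies):
    """economies: dict name -> (ict_value_added, total_value_added)."""
    total_ict = sum(v[0] for v in economies.values())
    total_va = sum(v[1] for v in economies.values())
    benchmark = total_ict / total_va
    return {name: (ict / va) / benchmark for name, (ict, va) in economies.items()}

print(ict_specialization({
    "Economy A": (120.0, 1000.0),   # placeholder values
    "Economy B": (300.0, 5000.0),
}))
```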

In absolute values, the top world ICT manufacturing economies are the USA, China and Japan, while the economies that invest most in ICT manufacturing business R&D are the USA, Japan and the EU. Interestingly, China still has (in 2009) a rather low level of ICT manufacturing business R&D investment, in spite of its ICT manufacturing production being as important as that of the USA. One possible reason could be that a large proportion of ICT R&D may take place in public organizations. We plan to investigate this further in our paper.

In ICT services, the USA is also the world leader, followed by the EU.
There is clear evidence of a strong recent increase in ICT R&D in China. In 2008, China became second after Japan in ICT patenting, whereas it was sixth in 2000; and in 2009 it approached Korea's level of ICT BERD.

These initial results also provide substantial insights on the role of these economies in global ICT production networks.

Moderators
Speakers

Sunday September 29, 2013 10:10am - 10:45am
GMUSL Room 221

10:10am

Fixing Frand: A Pseudo-Pool Approach to Standards-Based Patent Licensing
Download Paper

Technical interoperability standards are critical elements of the modern telecommunications infrastructure. Most such standards are developed in so-called voluntary standards-development organizations (SDOs) that require participants to license patents essential to the standard on terms that are "fair, reasonable and non-discriminatory" (FRAND). FRAND commitments are thought to avoid the problem of patent hold-up: the imposition of excessive royalty demands after a standard has been widely adopted in the market. While, at first blush, FRAND commitments seem to assure product vendors that patents will not obstruct the manufacture and sale of standards-compliant products, in reality these commitments are vague and unreliable. Moreover, they have proven ineffective to address the problem of patent stacking, which occurs when multiple patent holders assert rights in, and demand royalties on, the same standard. The recent surge of litigation in the smart phone and other technology sectors, much of which concerns the interpretation and enforcement of FRAND commitments, has brought these issues to the attention of regulators, industry, and the public, and many agree that a better approach to FRAND is needed. 

In this paper, I propose a novel solution to the SDO FRAND problem that borrows from the related field of patent pools. In patent pools, multiple patent holders agree to charge a single, collective royalty on patents included in the pool. This structure, which has been utilized in connection with several successful industry standards, allows market participants to manufacture and sell standards-compliant products with a high degree of certainty regarding their aggregate royalty burden. While the cost and administrative overhead of patent pools may make them inapposite for the majority of standards developed in the SDO setting, salient features of pools can be adapted for use in SDOs under what I term a "pseudo-pool" approach. 

The pseudo-pool approach includes the following principal elements: (a) SDO participants must declare patents in good faith, (b) SDO working groups that include patent holders and potential vendors are encouraged to establish aggregate royalty rates for standards, (c) patent holders continue to grant licenses on FRAND terms, subject to the aggregate royalty agreement, (d) each patent holder is entitled to a share of the aggregate royalty based on a proportionality measure, (e) there is a defined penalty for over-declaration of patents, (f) each patent holder is permitted to license its patents independently of the pseudo-pooling arrangement, and (g) patent holders can opt out of the collective royalty structure if they so choose. This proposal encourages joint negotiation of royalty rates prior to lock-in of a standard, conduct that has been viewed with approval by several regulatory agencies and acknowledged as offering various procompetitive benefits. It is hoped that the proposed structure will eliminate the current uncertainty surrounding royalty levels on standardized products, while at the same time addressing the related issue of patent royalty stacking.
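Elements (d) and (e) can be illustrated with a toy allocation rule: shares of the aggregate royalty proportional to credited essential patents, with a simple penalty factor for over-declared patents. The mechanics and numbers are hypothetical, not a specification taken from the paper.

```python
# Illustrative sketch of elements (d) and (e) of the proposal (hypothetical
# mechanics: shares proportional to credited essential patents, with a simple
# penalty factor applied to patents later found to be over-declared).
def royalty_shares(aggregate_rate, holders, penalty=0.5):
    """
    holders: dict name -> {"declared": n, "over_declared": m} with m <= n.
    Returns each holder's share of the aggregate royalty rate.
    """
    credited = {name: h["declared"] - penalty * h["over_declared"]
                for name, h in holders.items()}
    total = sum(credited.values())
    return {name: aggregate_rate * c / total for name, c in credited.items()}

# Hypothetical example: a 5% aggregate royalty split between two holders.
print(royalty_shares(0.05, {
    "Holder A": {"declared": 40, "over_declared": 10},
    "Holder B": {"declared": 20, "over_declared": 0},
}))
```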

Speakers
JC

Jorge Contreras

Associate Professor of Law, American University


Sunday September 29, 2013 10:10am - 10:45am
GMUSL Room 225

10:10am

Reporting Policies of ISPs: Do General Terms and Conditions (GTCs) Match with the Reality?
Download Paper

Technological progress allows Internet Service Providers (ISPs) to carry out network management practices in a discriminatory fashion without being detected by their customers. This creates the risk that providers will exploit this information asymmetry in an opportunistic way by blocking and/or throttling certain services and applications without informing their customers in an adequate fashion that their Internet service is a restricted one.

Against this background, empirical studies based on M-Lab data (the Glasnost test) provide evidence that some ISPs in the countries examined (Germany, France, Italy and the USA) deployed DPI mechanisms during the given time period in order to discriminate against P2P BitTorrent applications. The investigations showed that some large ISPs were deliberately throttling or in some cases completely blocking BitTorrent traffic (even at times of low network load). Bearing this in mind, and the fact that an average Internet user is usually not aware of the common network management practices of her/his ISP, this paper examines the extent to which the reporting policies, as implemented through the General Terms and Conditions (GTCs), of selected European and US ISPs (both cable and telecom service providers) match the display of discriminatory behavior by the very same ISPs. Hence, we intend to answer the questions of whether European and US access providers inform their subscribers about ongoing (adverse) network management operations and whether there exists sufficient transparency regarding network management practices in Europe and the USA.

Moreover, in order to analyze the GTCs signed between ISPs and their subscribers, we apply a cross-provider/cross-country approach. By doing so, we seek to identify similarities and differences in the reporting policies among ISPs in general, but also between cable and DSL-based operators on a national or international level, especially between Europe and the USA, given their opposite regulatory situations in terms of net neutrality.

The study represents a semi-quantitative research design, to the extent that we investigate the GTCs of the selected providers based on a set of previously defined general assumptions and parameters, and compare these findings with the empirical results gathered from previous research projects.

Because GTCs represent a fundamental contractual agreement between an ISP and its subscribers, defining the duties and obligations both parties have to adhere to, they are also an important factor upon which end-users can make an informed decision about a particular ISP. Hence, a precise and accurate reporting system that clearly states all the bandwidth management activities a specific ISP conducts will certainly improve both transparency and competition in Internet access markets. This also aligns with the general EU and US approach towards net neutrality, which stipulates the promotion of competition and transparency in the field of telecommunications.

First results show no significant evidence of contractual transparency regarding the actual adverse deployment of Deep Packet Inspection (DPI) and similar traffic management tools by the ISPs. This supports our plea to promote transparency regarding ISPs' traffic management by "forcing" providers to make more precise statements about their general business practices.

For policy makers, this is yet another factor that needs to be taken into account when considering potential regulatory interventions in this field; an explicit network neutrality stipulation may be required in order to safeguard the openness of the Internet and protect consumers' basic rights to information and well-grounded choice.


Sunday September 29, 2013 10:10am - 10:45am
GMUSL Room 120

10:10am

Can We Can Spam? A Comparison of National Spam Regulations
Download Paper

This paper examines and compares the effectiveness of spam regulations in countries across the world, with a special emphasis on the CAN-SPAM Act in the United States. Our analysis is based on legal documents, court judgments, spam content, and international comparison. The conclusions identify problems with current national spam regulations and enforcement, and offer suggestions on the future architecture of international spam regulations that strike a balance between regulating spam and affording reasonable protections to commercial speech rights.



Ever since the birth of electronic mail, spam has been an annoyance for email users, besides occupying significant network resources. So far, more than 35 countries have promulgated legislation that restricts the use of spam email. The United States Congress established its first national standard, the CAN-SPAM Act, in 2003. Now, ten years later, it seems necessary to evaluate the effectiveness of the CAN-SPAM Act: do email inboxes actually see less spam? Unfortunately, a recent study, entitled "Internet Bad Neighborhoods," found that the United States was one of only three developed nations (along with Germany and Spain) to figure in the list of the top-20 countries with the highest absolute and per capita numbers of spammers (Moreira Moura, 2013, see p. 81). This finding raises important questions about why the CAN-SPAM Act appears to have been ineffective in fulfilling its goals and why the United States still figures among the biggest spam generators in the world. Did the CAN-SPAM Act, by narrowly defining the prohibited categories and preempting prior state laws offering consumer protections against spam, actually evoke the spike in spam, as Ford (2005) has argued? Why have other developed countries been more effective than the United States in controlling the incidence of spam? It is therefore worthwhile at this point to compare the CAN-SPAM Act to spam regulations in other countries along dimensions such as legal obligation, coverage, and scope.



The phenomenon of spam email has received considerable research attention from a variety of perspectives, including content analysis (Yu, 2011), spam's harms (Goldman, 2004), technical solutions (Potashman, 2006), economic analysis (Khong, 2004), and legal analysis (Sorkin, 2003; Dickinson, 2003), to name a few recent articles. However, no study has compared the different frameworks utilized by different countries to combat the spam issue.



This paper will fill this gap in the literature by comparatively analyzing the legal frameworks for spam in a set of countries selected to be representative in terms of level of development (advanced and developing economies) and per capita incidence of spam (high and low). Specific provisions of national laws, such as definitions of spam, protections (if any) for commercial speech, prohibited and permitted categories, penalties (if any), and enforcement mechanisms, will be compared based on legal documents and court judgments. In conjunction with data on the prevalence of spam originating from each country, conclusions will be drawn on the pros and cons of different spam regulations. The border-crossing nature of spam requires coordination between countries, and potentially an international legal framework to protect all. By identifying the common features and differences in national spam regulations, as well as data-based determinations of the efficacy of national regulations in controlling spam, the paper will offer insights on how to better fight the spam problem at a global level.



References:

Dickinson, D. (2004). An architecture for spam regulation. Federal Communications Law Journal, 57(1), 129-160.



Ford, R. A. (2005). Preemption of state spam laws by the federal CAN-SPAM Act. University of Chicago Law Review, 72, 355.



Goldman, E. (2003). Where's the beef? Dissecting spam's purported harms. The John Marshall Journal of Computer & Information Law, 22(1), 13.



Khong, D. W. K. (2004). An economic analysis of spam law. Erasmus Law and Economics Review, 1(1), 23-45.



Moreira Moura, G. C. (2013). Internet bad neighborhoods. (Doctoral dissertation). Retrieved from CTIT Ph.D.-thesis Series No. 12-237.



Potashman, M. (2006). International spam regulation & enforcement: recommendations following the World Summit on the Information Society. Boston College International and Comparative Law Review, 29(2), 323.



Sorkin, D. E. (2003). Spam legislation in the United States. The John Marshall Journal of Computer & Information Law, 22(1), 3.



Yu, S. (2011). Email spam and the CAN-SPAM Act: A qualitative analysis. International Journal of Cyber Criminology, 5(1), 715.

Moderators
Speakers
avatar for Snow Dong

Snow Dong

The Pennsylvania State University
KJ

Krishna Jayakar

Penn State University


Sunday September 29, 2013 10:10am - 10:45am
GMUSL Room 332

10:10am

The Role of Regulation in the Market: Analyzing Canada's Wireless Code of Conduct Hearings
Download Paper

This paper analyzes the ways that the terminology of the free market (competition, consumer choice, economic incentives) was mobilized selectively by Canada's wireless service providers during the February 2013 Canadian Radio-television and Telecommunications Commission (CRTC) Wireless Code of Conduct hearing. This hearing was part of the Canadian regulator's consultation to establish a mandatory code of conduct for Canada's wireless industry, in response to increasing public discontent with "the clarity and content of mobile wireless service contracts and related issues for consumers" (CRTC, 2012). Much of the public frustration was directed toward the "big three" telecommunication companies that dominate the marketplace, sharing 90% of the country's wireless subscriptions: Rogers, Bell, and Telus. The language of the public complaints tended to revolve around unfair business practices, price gouging, and poor customer service. In response, the telecom companies explained during the CRTC hearings that customers just don't understand the economics of service provision and that Canadian wireless services are globally competitive.

Having attended the hearings and applied discourse analysis methods to the hearing transcripts, we approach our data set from the hypothesis that free market language, particularly around competition and consumer choice, is frequently invoked because of its unquestioningly positive currency in both corporate and regulatory contexts. We analyze a number of key moments in the oral testimony of Rogers, Bell, and Telus to examine how they mobilized the language of the free market as a means of discouraging regulatory intervention. They talked about how Canada's wireless services market is fiercely competitive, how consumers have a wide range of choices, how device subsidies function as economic incentives that help consumers own the newest mobile technology, and how the implementation of a Wireless Code would pose operational challenges in this already competitive environment. But at the same time, each company's representatives tried to support consumer empowerment to some extent, since it is a necessary corollary to their arguments for consumer choice. In maneuvering around this tricky rhetorical territory, we observed that in the less formal moments of discussion during the hearings, speakers often amended or even contradicted what they had said in the more formal parts of their presentations.

The contradictory ways in which each company tried to frame their consumers as empowered but simultaneously unable to cope with the changes required by the Code indicates that the companies mobilized free market language deliberately in order to minimize the impact of the Wireless Code on their existing business practices. We analyze this discursive strategy critically from the point of view of the ascendance of consumer as opposed to citizen framing, situated as part of a broader trend toward individualism in telecommunication regulation globally (Livingstone & Lunt, 2007; O'Neill, 2010). This paper offers a nuanced perspective on the discursive particularities of regulatory hearings in Canada, where speakers often drift between formal and informal modes of discussion that tend to reveal their underlying values and motivations. The paper concludes with some thoughts on how the analysis of this Canadian regulatory hearing can inform understanding of regulatory actions to balance citizen, consumer, and corporate interests in other jurisdictions.

References

Canadian Radio-television and Telecommunications Commission (CRTC) (2012) Telecom Notice of Consultation CRTC 2012-557. [online] [Accessed 23 March 2013].

Livingstone, S. & Lunt, P. (2007) 'Representing citizens and consumers in media and communications regulation,' The ANNALS of the American Academy of Political and Social Science, 611, pp. 51-65.

O'Neill, B. (2010) 'Media literacy and communication rights: ethical individualism in the new media environment,' International Communication Gazette, vol. 72, nos. 4/5, pp. 323-338.

Moderators
JC

John Chapin

Program Manager, DARPA

Speakers
CM

Catherine Middleton

Canada Research Chair, Ryerson University
Ryerson University -


Sunday September 29, 2013 10:10am - 10:45am
GMUSL Room 329

11:10am

Search Concentration, Bias & Parochialism: Lessons from a Comparative Study of Google, Baidu & Jike's Search Results in China
Download Paper

Research Objectives, Importance & Novelty

Are search engines making the rich richer and the poor poorer by driving Web traffic to well-established sites while punishing the lesser known? Do search engines intentionally favor their own content while demoting others? How parochial or cosmopolitan are search engines in directing traffic to sites beyond the user's national borders?

These questions are crucial not only to websites and users, but also to the well-being of the entire Internet ecosystem that has become search-centered. As information gateways, search engines play a central role in influencing user attention, directing web traffic, and arbitrating advertising dollars. Search giants have also become increasingly vertically integrated, functioning as search engine/advertising agency/ratings system simultaneously. Their status raises concerns over search quality, competition, and openness. The stakes are high.

Finding evidence to start answering the above questions, however, is difficult, not least because search engines are complex and proprietary. This paper suggests that carefully executed comparative information retrieval research can provide much-needed empirical evidence to start probing questions of search concentration, bias and parochialism, particularly in international search markets like China (450 million search engine users, $900 million market size) where little to no independent research has been conducted on such critical issues.

Methodology, Data & Preliminary Results

Comparative search results research evaluates search quality and search engine properties by querying different search engines with a small sample of keywords to detect unique results patterns (e.g. results overlap, search ranking and filtering patterns). In this study, "search concentration" refers to the degree to which search results are concentrated in a few dominant websites. "Search bias" is defined as a combination of "own-content bias" (favored inclusion and ranking of the search company's own content) and "other-content bias" (exclusion from and lowered ranking of rivals' content). "Search parochialism" denotes the degree to which search engines include results from overseas sites.

This study compares longitudinal search results data collected inside China in August 2011 and August 2012 from three search engines: Google, Baidu, and Jike (state-sponsored). A total of 35 keywords were used as the sample: 20 from a government-sanctioned report that ranked the top 20 Internet events in 2010 (e.g. Tencent vs. 360 dispute, Shanghai Expo, Foxconn suicides and etc.); 15 were general terms (e.g. transportation, medicine, news). Measures were taken to minimize search personalization and other variables (e.g. disabling cookies, Internet Explorer as default browser etc.). Data was gathered from the same location roughly a year apart with the same set of keywords. The webpages containing the first 10 search results (textual only) were saved, yielding a total of 2100 textual hyperlinks for analysis.
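The concentration and own-content measures can be approximated with a simple script of the following kind; the domain lists, the engine-to-property mapping, and the metrics themselves are simplified assumptions rather than the authors' exact coding scheme.

```python
# Illustrative sketch (simplified metrics, not the authors' exact coding scheme):
# given the result URLs returned by an engine, compute the share of results
# coming from a set of dominant domains and count references to the engine's
# own properties. Domain lists and the OWN mapping are hypothetical.
from urllib.parse import urlparse

TOP_SITES = {"baidu.com", "sina.com.cn", "qq.com", "sohu.com", "163.com", "ifeng.com"}
OWN = {"google": {"google.com"}, "baidu": {"baidu.com"}, "jike": {"jike.com"}}

def from_site(url, sites):
    """True if the URL's host belongs to any of the given registered domains."""
    host = urlparse(url).netloc.lower()
    return any(host == s or host.endswith("." + s) for s in sites)

def metrics(engine, result_urls):
    n = len(result_urls)
    concentration = sum(from_site(u, TOP_SITES) for u in result_urls) / n
    own_refs = sum(from_site(u, OWN.get(engine, set())) for u in result_urls)
    return {"concentration": concentration, "own_refs": own_refs}

print(metrics("baidu", ["http://www.baidu.com/s?wd=x",
                        "http://news.sina.com.cn/a",
                        "http://example.org/page"]))
```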

Preliminary analysis finds grounds for concern in the Chinese search market. A high percentage of search results (as much as nearly 50%) came from five or six top sites in China: Baidu, Sina, Tencent, Sohu, 163.com and Phoenix (ifeng.com). Search concentration is more pronounced in Baidu and Jike than in Google. There was little change over time. Moreover, Baidu consistently referenced its own content significantly more often and ranked it consistently higher than the other two search engines did. For instance, Baidu referenced its own content 52 times in the 2012 data set, compared to 12 times for Google and 25 times for Jike. Baidu and Jike included few search results from overseas sites, while Google was more likely to do so.

Although the study's sample size is small and the results are not easily generalizable, this project may serve as a valuable baseline for future research and makes a start in raising and tackling these important research questions in the Chinese search engine market.

Moderators
Speakers
avatar for Min Jiang

Min Jiang

Associate Professor, UNC Charlotte
I research and publish in the area of Chinese Internet technologies (search engines, microblogging, big data), Internet politics and policies.


Sunday September 29, 2013 11:10am - 11:45am
GMUSL Room 221

11:10am

A Unified Framework for Open Access Regulation of Telecommunications Infrastructure: Literature Review and Policy Guidelines
Download Paper

The concept of Open Access (OA) plays a central role in the ongoing academic and political debate on the appropriate regulatory framework for next-generation access networks (NGAN). In particular, OA is believed to provide a balance between static and dynamic efficiency, i.e., between the stimulation of competition and the encouragement of investment. However, clear policy conclusions on the effect of OA regulation have usually been precluded by a fundamental lack of common understanding of what actually defines an OA policy and along which dimensions OA regulation can be structured. For example, while OA has been used by American scholars to describe access obligations including price regulation, the European Commission's understanding of OA refers to mandated access in the case of state aid, and at the other end network operators have put emphasis on voluntary access. Again, some definitions state the vertical separation of upstream and downstream activity as a prerequisite, while others are explicitly concerned with the application in the case of vertically integrated access providers.
In this paper, we reconcile these diverse views by offering an integrative, universal definition of OA that is based on the central principle of non-discrimination. Moreover, based on this definition, we develop a conceptual framework by which OA endeavors can be uniquely identified. This allows us to classify, compare, and benchmark different concepts of OA that are discussed in the extant literature. More specifically, our conceptual framework is structured along the following dimensions: (1) vertical structure of the access provider, i.e., whether the access provider is active in the downstream market or may represent a cooperative of multiple firms; (2) business model of the access provider, i.e., the ownership structure and goals of the organization; and (3) access level, i.e., the degree of quality differentiation that the access seeker can exploit.

By classifying the various applications of the OA concept along these dimensions we are able to relate the implications of an OA regime to the surrounding environment and the properties of the particular access relationship. In particular, in the context of this framework we survey the extant literature with regard to aspects of consumer and total welfare (static efficiency), investment and innovation (dynamic efficiency), as well as practical and legal issues (regulatory requirements) and find systematic trade-offs and policy conclusions.

For example, in the case of public-sector participation we conclude that mandated OA represents an effective instrument to minimize the crowding-out effect when applied to low access levels such as ducts. Local initiatives can reduce the negative effects of the central-planner paradigm underlying traditional public-sector participation and are able to achieve economies of scope, but exhibit an inherent lack of scale. In the case of private ownership and vertical separation there is a generally negative effect on the coordination of investments between the upstream and downstream segments, but we argue that this may play a minor role in the NGAN context. In the case of vertical integration, OA may serve as an instrument to abstain from price regulation and rely solely on a margin squeeze test. The impact on facility-based competition appears to be ambiguous, since there is a lower incentive to duplicate infrastructure, but wholesale competition may be encouraged. Cooperative investment approaches pose a new challenge by raising the question of how OA should be granted along the temporal dimension (ex ante vs. ex post).

While the main focus of this paper is on access networks, we finally also discuss the potential extensions of the OA framework to higher layers of the value chain. In a market where network operators are now competing with IP-based service providers, innovation is driven by integrated hardware and software eco-systems, and service platforms may become new bottlenecks for complementary services, questions surrounding OA are likely to include the application to non-physical infrastructure in the future.


Moderators
avatar for Karen Rose

Karen Rose

Internet Society
Karen has been active across Internet technology, policy, and development for nearly 20 years, including prior roles in Internet start-ups, government, and management consulting. She began her career in public policy working on Internet and e-commerce issues at the U.S. Federal Communications Commission. She later joined the National Telecommunications and Information Administration of the U.S. Department of Commerce where her work focused on the... Read More →

Speakers
avatar for Daniel Schnurr

Daniel Schnurr

Karlsruhe Institute of Technology


Sunday September 29, 2013 11:10am - 11:45am
GMUSL Room 225

11:10am

Public Computing Centers: Beyond 'Public' and 'Computing'
Download Paper

Overall broadband adoption has leveled off at just under 70% in the United States, and many studies have suggested that large access and use gaps separate urban/suburban and rural/metro areas. It also seems clear that certain population groups have less ability and interest in using broadband, contributing to another type of gap. These differences could create adverse economic and social consequences for those with constraints on Internet access and training. Previous studies have analyzed the digital divide in terms of differences in broadband availability, digital literacy levels, the effects of race and ethnicity, the influence of social class, and the impact of spatial location (LaRose, Gregg, Strover, Straubhaar, & Carpenter, 2007; Stevenson, 2009; Flamm, 2013), and various policy responses have attempted to redress the presumed inequities associated with such divides. This paper is based on current research examining the dynamics and efficacy of one such response, namely public computing centers (PCCs). According to NTIA, PCCs are "projects to establish new public computer facilities or upgrade existing ones that provide broadband access to the general public or to specific vulnerable populations, such as low-income individuals, the unemployed, senior citizens, children, minorities, tribal communities and people with disabilities" (NTIA, 2011, p. 5). Such sites were a target for funding under NTIA's Broadband Technology Opportunities Program, initiated in 2009. The goals of our investigation are to assess the current and future roles of the public computing center model in terms of user outcomes, inter-organizational dependencies, and the shifting challenges and opportunities offered by portable and increasingly less expensive technologies (mobile phones, tablets, laptops) in an environment with increasingly ubiquitous wifi and 3G and 4G services.

Our study investigates the operations of one of the largest of the 65 Public Computing Center grants awarded by NTIA, headed by a non-profit collaboration that established and/or augmented 90 computing centers located in three cities and several rural sites in Texas, an ethnically and culturally very diverse state. The PCCs themselves are typically co-located in libraries, community centers, homeless shelters, low-income residential complexes, and other targeted service operations. Our research focuses on two main aspects of the program: first, the organizational challenges of mounting an effort this large, which entails working with many nonprofit initiatives simultaneously; second, the outcomes for users of the public computing facilities. We have gathered data on a subset of 15 sites in both urban and rural locations using both qualitative and quantitative techniques. Our quantitative data include usage analyses of computers at the sites and browser histories that convey a sense of the Internet-based resources used by the centers' clientele; the qualitative data are based on approximately 90 interviews with staff and users at various sites as well as observations at the sites. The data examine the public computing centers' roles in their respective communities, the differences between urban and rural sites, the inter-organizational dynamics that typify social efforts to respond to (and to characterize) the digital divide, users' profiles and reasons for using the centers, and the long-term prospects of public computer centers in the face of both technological changes and shifts in the political and economic environments supportive of these efforts.

We are currently completing our fieldwork and anticipate finalizing a preliminary analysis by June, 2013. Some early findings probe the utility of the brick-and-mortar, desktop computer model of the typical PCC and suggest that policy makers should (1) reevaluate precisely what such centers should be expected to achieve; (2) address the unique challenges of giving children access to these sites; and (3) recognize the heterogeneous nature of the centers, and capitalize on the community- or target user-based aspect of the most successful locations.

References

Jayakar, K., & Park, E. (2012). Funding public computing centers: Balancing broadband availability and expected demand. Government Information Quarterly, 29(1), 50-59. doi: 10.1016/j.giq.2011.02.005
Flamm, Kenneth (2013), ?The determinants of disconnectedness: Understanding US Broadband Unavailability,? in R.D. Taylor and A.M. Schejter, Ed., Beyond Broadband Access: Developing Data-Based Information Policy Strategies. New York, NY: Fordham University Press.
LaRose, R., Gregg, J., Strover, S., Straubhaar, J., & Carpenter, S. (2007). Closing the rural broadband gap: Promoting adoption of the Internet in rural America. Telecommunications Policy, 31(6-7), 359-373. doi: 10.1016/j.telpol.2007.04.004
National Telecommunications & Information Administration [NTIA] (2011). American Recovery and Reinvestment Act of 2009. Retrieved from: http://www.ntia.doc.gov/page/2011/american-recovery-and-reinvestment-act-2009
Peacock, A. (2012, September). Towards a more inclusive information society: A case study of a digital inclusion initiative in Jalisco, Mexico, Paper presented at the Telecommunications Policy Research Conference, Arlington, VA.
Schejter, A., & Martin, B. (2012, September). If you build it?Will they come? Understanding the information needs of users of BTOP funded broadband internet public computer centers. Paper presented at the Telecommunications Policy Research Conference, Arlington, VA.
Stevenson, S. (2009). Digital Divide: A Discursive Move Away from the Real Inequities. The Information Society, 25(1), 1-22. doi: 10.1080/01972240802587539
Strover, S. (2001). Rural internet connectivity. Telecommunications Policy, 25(5), 331-347. doi: 10.1016/s0308-5961(01)00008-8
Strover, S. (2009). America's Forgotten Challenge: Rural Access. In A. Schejter (Ed.), ...And communications for all: a policy agenda for a new administration (pp. 203-221). Lanham, MD: Lexington Books.
Warschauer, M. (2003). Technology and Social Inclusion : Rethinking the Digital Divide. Cambridge, Mass.: MIT Press.


Sunday September 29, 2013 11:10am - 11:45am
GMUSL Room 329

11:10am

The Whole Picture: Where America’s Broadband Networks Really Stand
Download Paper

Objective: The paper analyzes recent trends and dynamics that affect the relative availability, utilization, quality, and value of American broadband networks versus those in other OECD nations. Most of the literature on this subject was developed five to ten years ago, prior to the deployment of DOCSIS 3, LTE, VDSL, and the smartphone, and during a period in which fiber deployment proceeded at a relatively slow rate in the U.S. due to the overhang of the fiber bubble of the late 1990s. Consequently, many analysts have accepted a version of conventional wisdom according to which the United States offers "second rate broadband" at high prices. While this was a defensible position in the late 2000s, the most recent data show that it is no longer the case. If the United States is improving its position relative to OECD competitors on the important metrics, it follows that its policy framework is fundamentally sound.

Method: The paper examines data drawn from the most comprehensive and current sources: OECD surveys, FCC surveys and testing, the National Broadband Map, the Berkman Center analysis, ITU surveys, Akamai, SamKnows, and Netindex performance data, and up-to-date industry analyst data on subscriber churn, fiber deployment, and provider profitability. It employs regression analysis to estimate the correlations between broadband adoption, price, and computer ownership, as well as survey data to identify barriers to adoption. The paper develops trend lines on the relative ranking of the U.S. and OECD nations with respect to the key metrics.
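As a sketch of the kind of regression described, the following fits an ordinary least squares model of broadband adoption on price and computer ownership using placeholder data; the paper's actual specification, variables, and sources differ.

```python
# Illustrative sketch (placeholder data, not the paper's dataset or exact
# specification): an OLS regression of country-level broadband adoption on
# price and computer ownership, of the kind described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 30                                   # hypothetical number of OECD countries
computer_ownership = rng.uniform(0.5, 0.95, n)
price = rng.uniform(20, 60, n)           # hypothetical monthly price
adoption = 0.9 * computer_ownership - 0.003 * price + rng.normal(0, 0.03, n)

X = sm.add_constant(np.column_stack([computer_ownership, price]))
model = sm.OLS(adoption, X).fit()
print(model.params)   # intercept, computer-ownership and price coefficients
```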

Novelty: Traditional analysis of broadband performance tends to be error prone, as the method typically used combines two estimates: 1) the availability of service tiers throughout the nation; and 2) the distribution of customers across available plans. This method assumes that plans actually provide the level of performance suggested by advertising, and that consumers are aware of the current speeds advertised for the plans they choose. As measurement indicates that these assumptions are faulty in many nations, the paper pursues a different avenue and examines speeds on the basis of measurement data collected by Akamai, Netindex, and SamKnows. It finds the Akamai data most useful and suggests these data are a powerful resource for policy analysis that has largely been overlooked.

Results: The paper finds that the downward trend in the adoption of high performance broadband plans and in overall broadband performance that characterized American broadband service in the late 2000s began to reverse in 2010. In one measurement, the U.S. ranked 22nd overall in the average speed of shared IP connections in Q4 2009, but ranked 8th by the same measurement in Q3 2012. It finds that the adoption of broadband by computer-owning American households exceeds the OECD mean, and that the deployment of DOCSIS 3 and LTE is stronger in the U.S. than in other nations.

The paper finds that the prime benefit of facilities-based competition is the creation of market dynamics in which service providers are able to compete on the basis of performance and coverage as well as price and customer service. It notes that the European Commission is developing a revised policy framework in order to create incentives for performance-based competition between providers.

Moderators
Speakers

Sunday September 29, 2013 11:10am - 11:45am
GMUSL Room 120

11:10am

Conceptualizing Communications Security: A Value Chain Approach
Download Paper

Cybersecurity has become a top priority for policymakers these days, but as the engineering saying goes: “if you don’t know what you want, it’s hard to do it right.” This paper finds considerable shortcomings in current conceptual and legal frameworks for communications security policymaking. The misleading concept of cybersecurity incorporates a wide range of social issues under its umbrella, such as child protection, foreign policy and intellectual property protection. Cybersecurity distracts communications security conceptualizations and policies from the technical conception that should be at their core, i.e. ensuring confidentiality, integrity and availability of communications to authorized entities. 

The paper develops a value chain approach for the conceptualization and legal governance of communications security. This value chain approach is informed by an empirical case study into HTTPS governance and multilateral security engineering methods SQUARE and MSRA. It offers a 9-step framework for granular, functional communications security conceptualizations, tailored to specific communications settings. The framework enables policymakers to devise technical security goals, apprise constitutional values, confront stakeholders interests and balance associated public and private interests. The value chain approach should assist policymakers in deciding what they want, and to know whether they are the appropriate actor to get it right. 

Moderators
Speakers

Sunday September 29, 2013 11:10am - 11:45am
GMUSL Room 332

11:45am

The Internet and Changes in Media Industry Structure: An International Comparative Approach
Download Paper

With the development of the Internet and online media, audiences and revenues are migrating from traditional offline to Internet-distributed online media. Several studies have shown that traditional offline media markets are shrinking with the expansion of the Internet. Some scholars detected a decline of advertising revenues of established individual delivery systems such as newspapers and music publishing, whereas others reported negative effects on aggregated advertising revenues.

While public attention has largely focused on the reduction of advertising revenues of established delivery systems, only a few studies have empirically examined the overall transition of the media industry caused by the Internet. In particular, one study reported current trends in the U.S. media industry: after U.S. Internet broadband penetration started to grow significantly in 1999, the aggregated revenue of the U.S. media industry steadily declined as a proportion of gross domestic product (GDP), and a marked shift from advertising toward direct payment occurred (Waterman & Ji, 2012).

In this study, we examine the economic effects of the Internet on changes in the media industries from a broad perspective. Our goal is to empirically test whether the two trends previously found in the U.S. media industry also apply in other countries. To this end, we extended our scope to an international comparison of media industries. We examine (1) whether online media have affected established traditional media in terms of aggregate revenues, and (2) whether the balance of advertising vs. direct payment support shifted as Internet penetration grew.

Our primary methodology is to quantify trends in the size of the media industries as a proportion of overall economic activity, or GDP. The GDP metric is used to give comparative meaning to the size of the media industries as a proportion of the overall economic activity of each country. In this manner, this study is in the tradition of information society studies that measure the size of the "information economy" as a proportion of total economic activity.

Based on industry and government sources, we constructed a country-level dynamic panel of 45 countries from 2005 to 2011. The data set includes individual national trends of the media industries in terms of % of GDP, country-specific economic status, Internet and broadband penetration, and the ICT Opportunity Index (ICT OI) measured by the ITU. We empirically investigate the Internet's effect on the overall transition of traditional media industries, covering 10 individual media: the newspaper, broadcast TV, subscription TV, box office, home video, recorded music, radio, book, magazine, and video game industries.

Specifically, the dependent variable, the balance between advertising-supported and direct-payment media revenue (advertising revenue divided by total media revenue), is regressed on a set of control variables, in particular Internet penetration (or the ICT-OI), and two lags of the dependent variable. To control for autocorrelation, unobserved country characteristics, and endogeneity, we estimate our results using linear generalized method of moments (GMM) estimators (Arellano & Bond, 1991; Bond, 2002; Blundell & Bond, 2000), which allow us to use internal instruments.
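To make the estimation strategy concrete, the following is a minimal sketch of the kind of dynamic panel specification and moment conditions implied by the description above; the notation is illustrative rather than the authors' exact model, with AdShare_{it} the advertising share of media revenue in country i and year t, Internet_{it} the penetration measure, x_{it} the controls, \mu_i a country effect, and \varepsilon_{it} an idiosyncratic error:

    AdShare_{it} = \alpha_1 AdShare_{i,t-1} + \alpha_2 AdShare_{i,t-2} + \gamma\, Internet_{it} + \beta' x_{it} + \mu_i + \varepsilon_{it}

First-differencing removes \mu_i, and lagged levels of the dependent variable serve as internal instruments via the difference-GMM moment conditions

    E[\, AdShare_{i,t-s} \, \Delta\varepsilon_{it} \,] = 0 \quad \text{for } s \ge 2 \quad (Arellano & Bond, 1991),

with the additional level-equation moments E[\, \Delta AdShare_{i,t-1}\, (\mu_i + \varepsilon_{it}) \,] = 0 available under system GMM (Blundell & Bond, 2000).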

Our preliminary analysis shows that (1) the development of the Internet may drive a shift in the balance from advertising toward direct-payment revenue, and (2) the Internet negatively affects the aggregate revenue of established media industries as a percentage of GDP. The reasons behind these trends and their policy implications will be discussed.


References

Arellano, M., & Bond, S. (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. The Review of Economic Studies, 58(2), 277-297.

Blundell, R., & Bond, S. (2000). GMM estimation with persistent panel data: An application to production functions. Econometric Reviews, 19(3), 321-340.

Bond, S. R. (2002). Dynamic panel data models: a guide to micro data methods and practice. Portuguese Economic Journal, 1(2), 141-162.

Waterman, D., & Ji, S. W. (2012). Online Versus Offline in the United States: Are the Media Shrinking? The Information Society, 28(5), 285-303.

Moderators
Speakers

Sunday September 29, 2013 11:45am - 12:20pm
GMUSL Room 221

11:45am

Regulatory Policies in Relation to Metrics and Data Collection for Measuring the Emergent Internet
Download Paper

The Internet is currently undergoing a major process of change and transformation. It is moving away from a basic layered architecture toward a modular architecture (Garud, Kumaraswamy et al. 2003; Clark 2004; Fransman 2010; Yoo 2010) with integrated provisioning of digital services and products to users. Furthermore, traffic volumes and the asymmetry of the traffic information available for analysis make it difficult to gain a full overview of and understand these changes (Liebenau, Elaluf-Calderwood et al. 2012; Hallingby, Hartviksen et al. 2012). Studying the Internet as a whole is therefore difficult, and there are many issues with data collection, to which the academic and commercial literature provides plenty of references. The analysis is made even more complicated when trying to address the medium- and long-term sustainability of the telecom and Internet industries (Yoo 2012).



Value creation and capture is a growing challenge for Internet ecosystem stakeholders seeking to re-innovate a sustainable system. The Internet therefore changes the actions of national and regional regulators. Regulators act normatively on behalf of consumers and to ensure adequate investment in society-critical infrastructure (FCC 2011). Their goal is to mediate using competition laws and rules, as the recent French case of Cogent vs. France Telecom shows (ARCEP 2012). This is particularly due to the fast convergence of the Internet and telecom. The transforming state of the Internet has led many regulators around the world to make efforts to collect data for such regulatory purposes, but with varying degrees of success. Measuring the Internet thus remains a huge challenge, and we suggest some ways forward in this paper.



Norway is a relatively small country "in the world of the Internet" (Hallingby and Erdal 2011). However, its size and other aspects of Nordic culture (e.g. openness to accountability, a sense of community at all levels of society) have created an environment in which the national regulator NPT has multiple sources of data (NPT 2012b) and in which correlated Internet data are collected by diverse institutions. This has resulted in a clear and well-explained ability to describe the Norwegian Internet (Hallingby and Erdal 2011). There is also a culture of proactive regulatory engagement with issues as they first emerge, e.g. the legal status of CDNs (NPT 2012a).



This article discusses the types of metrics required to explain the link between Internet network measures and Internet economic variables. First, we describe the emerging Internet in Norway, which is also indicative of a more generic change. More importantly for the purpose of this article, we believe these metrics are very valuable to companies, users, regulators and other stakeholders. Specifically, we use the case of Norway as an example of the type of knowledge that may be developed, how these mappings can be performed, the scope and limitations of such a methodology, and how it can be used by regulatory authorities to monitor, but not obstruct, the development of business activities. We also review the usefulness of this type of measurement in the context of a recent regulatory analysis of CDNs in Norway.

Moderators

Karen Rose

Internet Society
Karen has been active across Internet technology, policy, and development for nearly 20 years, including prior roles in Internet start-ups, government, and management consulting. She began her career in public policy working on Internet and e-commerce issues at the U.S. Federal Communications Commission. She later joined the National Telecommunications and Information Administration of the U.S. Department of Commerce where her work focused on the...

Speakers

Sunday September 29, 2013 11:45am - 12:20pm
GMUSL Room 225

11:45am

Digital Haves and Have-Nots: Internet and Broadband Usage in Canada and the United States
Download Paper

The Internet has become a fundamental part of the worldwide economic and social infrastructure. It provides businesses, communities and individuals with a common global platform for communication and commerce. Internet-adopting nations have experienced enhancements to productivity, global competitiveness, and job growth. According to International Telecommunication Union (ITU) estimates, one-third of the world's population of 7 billion were Internet users in 2011 [ITU, 2012].

In parallel, the increasingly intelligent mobile phone has become the most widely used communications device in the world. The ITU estimates that there were some 6 billion mobile service subscriptions by the end of 2011, that mobile broadband services grew by 40 percent worldwide in 2011, and that there are now twice as many mobile broadband subscriptions as fixed ones. The Boston Consulting Group forecasts [BCG, 2012] that by 2016, mobile devices such as smartphones and tablets could account for four out of five broadband connections.

Smartphones are already in widespread use, tablet computers are becoming increasingly popular, and laptops now compete with desktop PCs in functionality. As high-speed mobile Internet service becomes more readily available and affordable, mobile devices are being used widely for business applications as well as for personal and social purposes. As use of the broadband platform shifts from primarily wireline to increasingly wireless, challenges of both measurement and understanding will arise. Particular attention needs to be paid to the growth of mobile broadband services and usage.

This paper builds on the 2012 TPRC paper by the same authors on broadband adoption and use in Canada and the United States [McConnaughey et al., 2012]. Although very different in their population densities, the two countries have many similarities in terms of geography, demographic patterns, socio-economic factors, and challenges hindering universal broadband Internet adoption. The paper focuses on a comparison and evaluation of broadband adoption and usage results from major national surveys: the Statistics Canada Canadian Internet Use Survey (CIUS) for the years 2005, 2007, 2009, 2010, and 2012, and the U.S. Census Bureau's Current Population Survey (CPS) Computer and Internet Use Supplement for the years 2009, 2010, 2011, and 2012. Broadband Internet availability data and subscription rates come from the CRTC's annual Communications Monitoring Report as well as the NTIA's National Broadband Map and the FCC's Internet Access Services (FCC Form 477) reports. The CPS results used in this paper are taken from the Digital Nation series of reports published by the Department of Commerce.

In our analysis, we examine online activities in detail. We pay particular attention to usage disparities based on socio-demographic categories and on geographic location of use, as reported in rural and urban areas. Subject to data limitations, we also look at mobile usage (e.g., how the use of smartphones and other mobile devices may differ from use of PCs at a fixed location), and at whether a "Mobility Divide" might be starting to develop. Further, we explore policy ramifications in light of our findings regarding online activity patterns, drawing comparisons and contrasts between Canada and the United States where appropriate. Detailed breakouts by socio-demographic factors and geography provide information on which to base targeted demand-side policies. Such policies can both address adoption and usage gaps and complement the more usual supply-side policies used to address availability shortfalls.
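As a purely illustrative sketch of the kind of demand-side breakout this analysis implies, the following Python snippet computes weighted broadband adoption rates by demographic group from survey microdata; the column names and toy figures are hypothetical and do not reflect the CIUS or CPS schemas:

import pandas as pd

def adoption_rates(df: pd.DataFrame, group_col: str) -> pd.Series:
    # Weighted share of respondents with home broadband, by demographic group.
    weighted = df.assign(w_broadband=df["has_broadband"] * df["survey_weight"])
    grouped = weighted.groupby(group_col)
    return grouped["w_broadband"].sum() / grouped["survey_weight"].sum()

toy = pd.DataFrame({
    "has_broadband": [1, 0, 1, 1, 0],
    "survey_weight": [1200, 900, 1500, 800, 1100],
    "area": ["urban", "rural", "urban", "rural", "rural"],
})
print(adoption_rates(toy, "area"))   # weighted adoption rate for rural vs. urban respondents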

Moderators
Speakers
PN

Prabir Neogi

Carleton University, Canada-India Centre for Excellence


Sunday September 29, 2013 11:45am - 12:20pm
GMUSL Room 120

11:45am

Metrics for Assessing Internet Business Models and Sustainability
Download Paper

As the internet changes structure and behavior in ways better modeled in terms of modular composition than as an architecture based on functional links between physical layers and services, well-established network metrics appear increasingly less suited to assessing economic and business strategy features. This change is described by Labovitz (Labovitz, Iekel-Johnson et al. 2010), Fransman (2010), Clark (Clark, Lehr et al. 2011), Yoo (2012), Frischmann (2012) and in our own recent work (Liebenau, Elaluf-Calderwood et al. 2012). Increasingly, measurement approaches such as those conducted by the Cooperative Association for Internet Data Analysis (CAIDA) at the University of California, San Diego are reassessing what, when, how and why metrics are collected.

The structure of emerging business models, the value generated from certain kinds of product and service delivery, and the spillover effects that concern public policy are all masked by the typical focus on broadband speed, routing tables and traffic estimates between nodes. Along with current discussions about "quality of experience" (Kruse 2009) and new approaches to concentration (Noam 2010), we address the problem of how to assess information asymmetries (Claffy 2008), the relationships among different kinds of traffic (transit, private, peered, intra-network, etc.), flows of funds, and other features that characterize the interrelationships of all major aspects of internet core businesses, whether regulated or not.

In this paper we describe how changes to the layered model of internet architecture have created space for new services and products, and in particular the mixing of roles across previous boundaries described in part by the distinction between regulated and unregulated segments. We then consider the utility of current metrics and show how additional or alternative metrics can enhance our understanding, such that we can come to a far better view of trends and practices on the internet. We show the minimum requirements for data that ought to be visible to all stakeholders of the internet.

Through a detailed taxonomy of different sources of internet metrics (e.g. traffic through internet exchanges, charges for services, Cisco network data, Sandvine traffic statistics, etc.), we show the utility of new ways to capture and assess the important economic characteristics that will help in promoting the sustainability of the internet. We begin with a critique of a typical matrix describing web measurements associated with the physical and logical layers of the internet (e.g. from CENDI of the U.S. Government; Hodge, 2000). We assess the value of the criteria traditionally used and show how some are misleading, some inappropriate, and some useful when considered in conjunction with new criteria of assessment. We show how the careful inclusion of jurisdictional information, government fiscal policies and regulation, costing estimates, maintenance, network qualities and other evidence from political economy can provide superior metrics that form the basis for better economic analyses and for judging business models and sustainability.
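To illustrate (not reproduce) the kind of multi-source taxonomy the paper proposes, the following Python sketch records, for each candidate metric, its source, layer, unit and visibility; all categories and example entries are assumptions made for exposition only:

from dataclasses import dataclass

@dataclass
class InternetMetric:
    name: str        # e.g. "IXP peak traffic"
    source: str      # e.g. "internet exchange", "operator contracts", "regulator filings"
    layer: str       # "physical", "logical", or "economic/institutional"
    unit: str        # e.g. "Gbps", "USD/Mbps/month", "index"
    visibility: str  # "public", "regulator-only", or "proprietary"

taxonomy = [
    InternetMetric("IXP peak traffic", "internet exchange", "logical", "Gbps", "public"),
    InternetMetric("Transit price", "operator contracts", "economic/institutional", "USD/Mbps/month", "proprietary"),
    InternetMetric("Licence and spectrum fees", "regulator filings", "economic/institutional", "USD", "public"),
]
print([m.name for m in taxonomy if m.visibility == "public"])   # metrics already visible to all stakeholders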

References
Claffy, K. (2008). Ten Things Lawyers Should Know About the Internet. San Diego, USA: CAIDA, University of California, San Diego: 24.
Clark, D., W. Lehr, et al. (2011). Interconnection in the Internet: The Policy Challenge. 39th Research Conference on Communication, Information and Internet Policy, George Mason University, Arlington, VA.
Fransman, M. (2010). The New ICT Ecosystem: Implications for Policy and Regulation. Cambridge: Cambridge University Press.
Frischmann, B. (2012). Infrastructure: The Social Value of Shared Resources. Oxford: Oxford University Press.
Hodge, G. (2000). Web Metrics and Evaluation: Current State of Implementation Among the CENDI Agencies, Phase 1. Oak Ridge, Tennessee: Information International Associates, Inc.: 23.
Kruse, J. (2009). Priority and Internet Quality. Diskussionspapierreihe / Working Paper Series. Hamburg, Germany: Department of Economics, Helmut Schmidt Universität: 24.
Labovitz, C., S. Iekel-Johnson, et al. (2010). Internet Inter-Domain Traffic. SIGCOMM '10, New York, USA: ACM.
Liebenau, J., S. Elaluf-Calderwood, et al. (2012). "Strategic Challenges for the European Telecom Sector: The Consequences of Imbalances in Internet Traffic." Journal of Information Policy 2: 248-272.
Noam, E. M. (2010). "Media Ownership and Concentration in America." The Journal of American Culture 33(5): 348-349.
Yoo, C. S. (2012). The Dynamic Internet: How Technology, Users and Businesses are Transforming the Network. AEI Press.


Sunday September 29, 2013 11:45am - 12:20pm
GMUSL Room 329

11:45am

Securitizing Critical Infrastructure, Blurring Organizational Boundaries: The U.S. Einstein Program
Download Paper

Objective: The objective of this research is to understand how organizations and policies were altered by a new information security technology, namely intrusion detection and intrusion prevention systems (IDS/IPS). The paper tracks the progress of the U.S. federal government's Einstein program between 2003 and 2013, which implemented first IDS and later IPS capabilities in U.S. government agency networks. IDS/IPS inspects data packets in real time and decides how to treat network traffic based on automated recognition of threats. The paper will: (1) analyze the securitization of U.S. information infrastructure; and (2) assess the potential implications for U.S. telecommunications policy.
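As a toy illustration of the signature-based inspection that IDS/IPS performs (not the Einstein program's actual rule set or architecture), the following Python sketch checks packet payloads against a list of byte-pattern signatures and either alerts (detection) or blocks (prevention); the signatures shown are invented for the example:

SIGNATURES = {
    b"\x90\x90\x90\x90": "possible NOP sled",
    b"cmd.exe": "suspicious command string",
}

def inspect(payload: bytes, prevention_mode: bool) -> str:
    # Return 'pass', 'alert' (detection only) or 'block' (prevention) for one payload.
    for pattern, reason in SIGNATURES.items():
        if pattern in payload:
            # 'reason' could be logged or reported here.
            return "block" if prevention_mode else "alert"
    return "pass"

print(inspect(b"GET /cmd.exe HTTP/1.1", prevention_mode=True))   # -> block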



Method: We draw on interviews with principal actors, documentary evidence from federal privacy impact assessment reports, policy documents and news reports to track the progress of the program. Theoretically, we draw on new institutional economics, specifically transaction costs theory, to explain how security technologies can create new dependencies or new forms of supervision across organizational boundaries, or require hierarchies where before there were market transactions or looser, networked forms of cooperation.



Findings: Cybersecurity efforts can alter the boundaries between governmental and private networks. The implementation of cybersecurity policies struggled to maintain a clear line between security and surveillance.



As Einstein progressed, the relationships between private sector operators of Internet infrastructure and the government's Internet security initiatives proved to be especially sensitive. The DPI technology required greater coordination and some degree of organizational centralization, as well as new forms of information sharing between military intelligence agencies and civilian agencies. This restructuring profoundly affected the relationships between ISPs and the U.S. government. The civilian Department of Homeland Security (DHS) had to serve as the "trusted intermediary" between the private sector actors and military and intelligence agencies. Initially the DPI equipment was housed in the government agencies; later the ISPs retained control of the DPI equipment while being given signatures by federal agencies. In a 2013 executive order the government extended Einstein to include networks of the private sector Defense Industrial Base (DIB) and Critical Infrastructure (CI) companies. Through contractual arrangements, those entities can receive Einstein capabilities provided by telecom providers AT&T and CenturyLink and defense companies Lockheed Martin and Raytheon.



Implementation of the technology, in other words, opened for negotiation the question of authoritative responsibilities between civilian and military/intelligence government entities, and the boundaries between the public and the private sector in cybersecurity efforts. These dynamics triggered strong apprehensions about surveillance, security and civil liberties, as well as concerns about the distribution of costs and risks, which in turn seem to have had a strong impact on the way the technology was implemented.



Contribution: This article contributes to the debate on the Internet's securitization (e.g. Dunn Cavelty, 2008) by providing an empirical, longitudinal case study on technology deployment and its institutional effects. The emphasis on transaction costs provides a suitable analytical method for understanding the factors that created both pressure for and resistance to organizational changes.





References:

Dunn Cavelty, Myriam (2008). Cyber-Security and Threat Politics: US Efforts to Secure the Information Age. Routledge.

Moderators
Speakers

Milton Mueller

Professor, Georgia Institute of Technology
(TBC) Milton Mueller is Professor at the School of Public Policy, Georgia Institute of Technology, USA. Mueller received the Ph.D. from the University of Pennsylvania’s Annenberg School in 1989. His research focuses on rights, institutions and global governance in communication and information industries. He is the author of two seminal books on Internet governance, Ruling the Root and Networks and States. Mueller was one of the founders of...


Sunday September 29, 2013 11:45am - 12:20pm
GMUSL Room 332

12:15pm

Contextual Expectations of Privacy in Self-Generated Health Information Flows
Download Paper

Rapid developments in health self-quantification via ubiquitous computing devices point to a future in which individuals will regularly collect health and wellness information using smartphone apps and health sensors, and share it online for purposes of self-experimentation, community building, and research. Already, one-quarter of adult Internet users report tracking their own health data online, and with more than 40,000 mobile medical apps now available on the market, the field is projected to grow 25% annually. However, self-tracking users of consumer-grade health tools may inadvertently reveal private facts that they do not intend to share with others, including their locations, movement habits, sensitive medical conditions, psychological states, and personal behaviors. These unintended data flows may support reliable health inferences owing to growing contemporary practices of data mining and profiling, leading to fears of employment and insurance discrimination, dignitary harms, and the gradual unraveling of social expectations of privacy in health information.

In the absence of clear statutory or regulatory architectures for self-generated health data, its privacy and security rest heavily on robust individual data management practices, which in turn rest in part on users' understandings of the legal protections and commercial terms of service governing information flow. However, little is known about how individuals understand their privacy rights in self-generated wellness and health data under existing laws or privacy policies, or about how these beliefs guide information management practices. Users value privacy but often misapprehend their legal protections. In the present case, we caution that if individuals consider self-generated wellness information to be medical in nature, they may be more likely to believe it is protected by HIPAA and other privacy laws, leading to less privacy-protective information sharing behaviors.

In the present study, we conduct twenty in-depth, structured qualitative interviews with users of the popular self-quantification service Fitbit to answer the following questions: (1) How do self-tracking individuals understand their privacy rights in self-generated health information versus clinically generated medical information? (2) In what ways do user beliefs about perceived privacy protections guide data management practices? (3) How closely do user beliefs (and preferences) comport with existing legal protections and privacy policies? Participants also complete a card-sorting task that allows for deeper reflection upon data sharing preferences, and answer several established survey questions designed to assess privacy knowledge.

Early data suggests that research participants regard self-generated information as quasi-medical in nature, relying upon it to actively make changes to their health. However, they are also aware that the context in which information is collected and shared does have consequences for its subsequent legal protections, and thus take pains to manage information collection and dissemination thoughtfully. Individuals exhibit highly granular sharing preferences about the nature of the information collected and the likely recipients of that information, including the purposes for which they believe information should legitimately be used. Particularly, information-sharing preferences are strongly guided by context-dependent social norms regarding appropriate second-order health information transmissions, such as confidentiality. This leads to situations in which doctors and family are trusted recipients (because it is understood that once shared, information will not continue to change hands), while commercial researchers, insurance companies, and marketers are less-preferred information recipients for the same reason. Thus, even if users are not always correct about their legal rights in self-generated health data, they aspire to guide their personal health data flows, and would benefit from commercial privacy settings and policies that afford them simple, transparent, and highly granular control over information that maps to internal representations of appropriate data flow norms.

This paper will be presented as a working paper at the 2013 Privacy Law Scholars Conference at the University of California, Berkeley; it has not been published (or submitted for publication) elsewhere.

Moderators
Speakers

Sunday September 29, 2013 12:15pm - 12:55pm
GMUSL Room 221

12:20pm

Telecom License Fees & Risks in Sub-Saharan Africa: Towards Better Licensing Policies
Download Paper

Disparities in telecom license fees between countries might lead to skepticism among investors about the “real” value of a license and to reluctance to apply for one. This paper presents a preliminary study of the relationship between license prices and the host country environment in the African mobile sector. It is based on an original database covering all mobile license acquisitions by the twelve biggest multinational operators that invested in sub-Saharan Africa over the period 2000-2010. The first part of the paper proposes a detailed classification of the host country characteristics and risks faced by an operator investing in a telecom asset in an African country, raising awareness of the challenges facing African mobile operators. The second part presents a preliminary quantitative analysis of the correlation between license fees and the host country environment, under the hypothesis that license prices are correlated with host country indicators. The last part argues that license prices should be set appropriately, so that investment amounts better reflect the characteristics and risks faced by investors on the continent. The paper highlights the importance of an accurate valuation of licenses, as well as of the policy process, in promoting further telecommunications investment on the continent.
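As a hedged sketch of the kind of correlation analysis described above, the following Python snippet regresses (log) license fees on a few host-country indicators; the variable names and toy numbers are hypothetical and are not drawn from the paper's database:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "log_license_fee": [16.1, 14.8, 17.3, 15.2, 16.9, 15.7],
    "gdp_per_capita":  [1200,  800, 2500,  950, 2100, 1400],
    "political_risk":  [5.2,   6.8,  3.9,  6.1,  4.4,  5.5],   # higher = riskier
    "population_m":    [45,    12,   90,   20,   60,   35],
})

model = smf.ols("log_license_fee ~ gdp_per_capita + political_risk + population_m", data=df).fit()
print(model.params)   # signs and magnitudes of the estimated correlations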

Moderators

Karen Rose

Internet Society
Karen has been active across Internet technology, policy, and development for nearly 20 years, including prior roles in Internet start-ups, government, and management consulting. She began her career in public policy working on Internet and e-commerce issues at the U.S. Federal Communications Commission. She later joined the National Telecommunications and Information Administration of the U.S. Department of Commerce where her work focused on the...

Speakers

Aude Schoentgen

PhD candidate, Télécom ParisTech
My research interests are telecommunications, investments, entry strategies in emerging markets (Africa), asset valuation, operators & licenses acquisitions, risk management and infrastructure sharing.


Sunday September 29, 2013 12:20pm - 1:00pm
GMUSL Room 225

12:20pm

Drivers and Barriers to the Uptake of a FTTH Ultra-Fast Broadband in New Zealand
Download Paper

New Zealand (NZ) is currently implementing a high-speed fiber-to-the-home (FTTH) broadband project known as the Ultra-Fast Broadband (UFB) initiative. In this article we explore drivers of and barriers to consumer adoption of UFB. We use a mixed-methods approach for conducting empirical research, which includes interviewing broadband consumers and analyzing secondary research insights from industry. Using grounded theory, we postulate research frameworks for consumer adoption of UFB access (CAUA) and consumer adoption of UFB content (CAUC) to portray pertinent consumer drivers, barriers, and deciding factors for the UFB initiative. We find that consumer awareness and pricing are the main factors that need to be addressed for a successful UFB rollout. This case study presents a timely and unique opportunity to analyze the issues involved as the market evolves and stabilizes, with the advantage of gathering consumer resistance insights from an early phase of technological introduction.

Moderators
Speakers

Sunday September 29, 2013 12:20pm - 1:00pm
GMUSL Room 120

12:20pm

The Impact of Bureaucratic Structures of the Regulatory Authorities on Diffusion of Telecommunication Technology: A Cross-National Analysis of the Regulatory Governance and its Impact on VoIP Regulation
Download Paper

The advent of the mobile Internet has brought the telecommunication market to the cusp of a new era: voice services no longer dominate use. Regulators are challenged by new innovations, surging demand, and an evolving market. We conduct a cross-national analysis to ascertain the effect of regulatory governance on the diffusion of technology, focusing on regulatory decisions regarding the embrace of Voice over IP (VoIP) technology.

Numerous studies have examined the impact of regulatory practices in both the developing and the developed world; however, empirical research on these issues is scarce. We examine the impact of the institutional environment on the regulatory decision-making process. We show that the structure of the regulatory body is highly correlated with the uncertainty in the regulatory environment that bars the diffusion of new innovations.

Operators and vendors in the telecommunication market adjust their responses as regulatory directives change. However, hasty decisions, an inclination toward micro-management, a lack of long-term planning, and frequent changes in regulatory directives render firms unable to anticipate regulatory change. Uncertainty in the regulatory decision-making process can therefore hinder operators' plans for network expansion, the introduction of new technology, and price structure, and can ultimately hinder the diffusion of technology. Our hypothesis is that, all else being equal, operators will invest more and keep prices low in an environment where regulatory decisions are not subject to frequent change.

Telecommunication regulatory authorities around the world are of different types: some are independent, some are semi-independent, and some work as dependent organizations within the bureaucracy. The structure of the organization in which decisions are made may affect whether it has a predisposition to be more lenient or more stringent in making public policy. Decisions made by regulators are influenced by the structure of their governing boards. Regulatory bodies with a strong presence of former employees of the incumbent operator might have a predilection toward their former workplace. Similarly, regulatory bodies composed of bureaucrats might serve undue governmental or political purposes, resulting in regulatory capture. This phenomenon could well result in a regulatory failure to cater to the greater need. In addition, regulators' personal relationships with government policymakers, the role of the judiciary, the way the bureaucracy deals with public pressure, and the extent to which it is influenced by national and multinational operators also affect various decisions.

The existing literature on the importance of bureaucratic structure holds divergent views on the matter: some see regulators as a mechanism for avoiding market failure, some believe that the presence of a strong judiciary obviates the need for regulatory authorities, and others believe that, for better results, the scope of government regulation should be minimal at best. Although many have shed light on the theoretical issues of structural composition, very few works have scrutinized these issues empirically. We believe that, as governments around the world have relied on regulatory authorities for better management of the market, it is more prudent to look at the structural composition of regulatory bodies than to scrutinize the importance of their very existence. This research empirically examines the impact of the structure of the regulatory board on outcomes.

Our hypothesis is that a regulatory body that enjoys independence (both financial and political), includes experts from various backgrounds (engineers, economists, lawyers), and has a good mix of former employees of the incumbent and alternative operators can reduce regulatory uncertainty. Hence the research question we try to answer is: do independence and the structure of the regulatory board reduce regulatory uncertainty and thus positively influence the diffusion of new technology? We try to answer this question in light of regulatory decisions related to VoIP legalization.

We conduct an econometric analysis of the effect of regulatory uncertainty using a unique multi-country panel dataset assembled from various sources, including the ITU, the POLCON database, the World Bank, Freedom House, and various regulatory authorities. The dataset covers 2000-2011 and contains information on the telecommunication regulatory environment, the structure of regulatory bodies, political pressure, discretionary limits, and demographic and mobile-industry variables for 127 countries. The dependent variable is a proxy for regulatory decisions: we focus on the regulator's decision to legalize VoIP-PSTN interconnection. Voice over IP (VoIP), in various forms, has become a prevalent technology for voice communication; however, many countries around the world have not yet legalized VoIP-PSTN interconnection. We posit that the regulatory bodies that have already legalized this communication method are forward-looking, pro-innovation, and capable of introducing new and disruptive technologies into the marketplace.
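The following Python sketch shows one way such a cross-country analysis of VoIP legalization could be set up as a logit model; the regressors, coefficients and data are entirely simulated for illustration and are not the authors' specification:

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200   # toy country-year observations
df = pd.DataFrame({
    "regulator_independent": rng.integers(0, 2, n),   # 1 = financially/politically independent
    "board_expert_mix":      rng.uniform(0, 1, n),    # share of non-bureaucrat experts on the board
    "polcon_index":          rng.uniform(0, 1, n),    # political-constraints proxy (POLCON-style)
})
# Simulated outcome: probability of legalizing VoIP-PSTN interconnection rises with independence and expertise.
xb = -1.0 + 1.5 * df["regulator_independent"] + 1.0 * df["board_expert_mix"] + 0.5 * df["polcon_index"]
df["voip_legal"] = rng.binomial(1, 1 / (1 + np.exp(-xb)))

X = sm.add_constant(df[["regulator_independent", "board_expert_mix", "polcon_index"]])
print(sm.Logit(df["voip_legal"], X).fit(disp=False).params)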

The analysis shows that the structure of the regulatory board is positively correlated with regulatory decisions related to technology diffusion. We also find that uncertainty around regulatory decisions significantly affects the relative rate of telecommunication infrastructure deployment. Uncertainty in the regulatory environment causes price volatility in the retail market and slows technology diffusion. The study provides new insights into the governance factors necessary to ensure effective telecommunication regulation in the changing telecommunication marketplace.

Moderators
Speakers

Sunday September 29, 2013 12:20pm - 1:00pm
GMUSL Room 329

12:20pm

Spam and Botnet Reputation Randomized Control Trials and Policy
Download Paper

Designing randomized control trials (RCTs) of the reputational effects of spam and botnet rankings, used as proxies for Internet security, poses interesting challenges. These challenges are related to the policy issues such reputation is intended to address. Building on preliminary results and the public SpamRankings.net top-10 rankings per country by spam volume from two anti-spam blocklists (see TPRC 2012 and 2011 papers), formal RCT experiments provide another level of evidence. However, using RCTs with thousands of organizations in treatment and control groups raises numerous difficulties, including non-homogeneous legal and organizational regimes and potential active opposition. Fortunately, most of these difficulties can be turned to advantages, and all have policy implications.

These complications, compared to RCTs based on more traditional one-shot econometric surveys with a single publication, arise because the subject of these field experiments is the live Internet in real time, with ongoing, updated treatments. The experimental treatments themselves act as information security (infosec), since their purpose is to cause improvements in infosec in treated companies through reputation. The treatments thus must adapt to changes in conditions on the Internet as they happen. Like other infosec, to be effective the treatments must also be portable across departments within treated organizations, plus customers and investors, and the experimental team itself crosses economics, MSIS, and computer science.

If the experiments demonstrate statistical evidence that this reputational approach works, such results will provide a new policy approach of reputational rankings, plus the beginnings of tools to apply that approach, ranging from the public treatments themselves to drilldowns into underlying details of the symptoms causing good or bad reputation.

Difficulties encountered include:

1) Differing sensitivities of different blocklists to spam from certain sources; sensitivities that change over time as the blocklists adapt to new miscreant behavior.

Approach: A weighted composite ranking based on both spam volume and spamming address count from at least two different blocklists (see the sketch after this list).

2) Heterogeneity of legal regimes and other characteristics across countries.

Approach: Initial experiments within a single country (the U.S.), perhaps followed by clustered RCT using countries as clusters.

3) Availability of organizational characterization information for stratification by industry (finance, medical, etc.) and within industry (ISPs or hosting, telephone company or cable company, etc.).

Approach: Start with the U.S., for which this information is relatively readily available in homogeneous form.

4) Public visibility is necessary for reputation so that customers and investors of treated organizations can see the treatments, yet it limits the flexibility of experimental treatments, since an ongoing, regularly updated treatment is hard to retract once deployed.

Approach: Start with a subset of the universe of spamming organizations and deploy more treatments for other organizations later, plus potential additional treatments for already-treated organizations, while tuning existing treatments like product releases.

5) Spammers or bot herders could choose to migrate away from treated organizations to untreated (control) organizations, interfering with independence of treated and control groups.

Approach: Use botnet volume and address data to observe whether this actually happens.

6) Miscreants may actively retaliate with DDoS or other attacks.

Approach: Harden the treatment websites by hosting them in a cloud provided by a very large organization.
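A minimal sketch of the weighted composite ranking mentioned in item 1 above: each organization's spam volume and spamming-address count are averaged across blocklists, normalized, and combined with configurable weights. The blocklist keys, weights and figures are illustrative assumptions, not the SpamRankings.net methodology:

def composite_ranking(orgs: dict, w_volume: float = 0.5, w_addresses: float = 0.5) -> list:
    # orgs maps org name -> {"volume": {blocklist: count}, "addresses": {blocklist: count}}.
    # Returns org names sorted worst-first by the weighted, normalized composite score.
    avg = {o: {k: sum(v[k].values()) / len(v[k]) for k in ("volume", "addresses")}
           for o, v in orgs.items()}
    max_vol = max(a["volume"] for a in avg.values()) or 1
    max_addr = max(a["addresses"] for a in avg.values()) or 1
    score = {o: w_volume * a["volume"] / max_vol + w_addresses * a["addresses"] / max_addr
             for o, a in avg.items()}
    return sorted(score, key=score.get, reverse=True)

example = {
    "org_a": {"volume": {"listA": 900, "listB": 1100}, "addresses": {"listA": 40, "listB": 55}},
    "org_b": {"volume": {"listA": 300, "listB": 200},  "addresses": {"listA": 80, "listB": 70}},
}
print(composite_ranking(example))   # worst offender first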

Preliminary RCT results are expected by the paper completion deadline and will be presented; the series of experiments supported by NSF grant 0831338 will continue, and the usual disclaimers apply.


Sunday September 29, 2013 12:20pm - 1:00pm
GMUSL Room 332