BatteryLab, A Distributed Power Monitoring
Platform For Mobile Devices
https://batterylab.dev
Matteo Varvello, Kleomenis Katevas, Mihai Plesa, Hamed Haddadi,
Benjamin Livshits
Brave Software, Imperial College London
ABSTRACT
Recent advances in cloud computing have simplified the way
that both software development and testing are performed.
Unfortunately, this is not true for battery testing for which
state of the art test-beds simply consist of one phone attached
to a power meter. These test-beds have limited resources, ac-
cess, and are overall hard to maintain; for these reasons, they
often sit idle with no experiment to run. In this paper, we
propose to share existing battery testing setups and build Bat-
teryLab, a distributed platform for battery measurements. Our
vision is to transform independent battery testing setups into
vantage points of a planetary-scale measurement platform
offering heterogeneous devices and testing conditions. In the
paper, we design and deploy a combination of hardware and
software solutions to enable BatteryLab’s vision. We then pre-
liminarily evaluate BatteryLab’s accuracy of battery reporting,
along with some system benchmarking. We also demonstrate
how BatteryLab can be used by researchers to investigate a
simple research question.
ACM Reference Format:
Matteo Varvello, Kleomenis Katevas, Mihai Plesa, Hamed Haddadi,
Benjamin Livshits. 2019. BatteryLab, A Distributed Power Monitor-
ing Platform For Mobile Devices: https://batterylab.dev. In The 18th
ACM Workshop on Hot Topics in Networks (HotNets ’19), Novem-
ber 13–15, 2019, Princeton, NJ, USA. ACM, New York, NY, USA,
8 pages. https://doi.org/10.1145/3365609.3365852
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
HotNets '19, November 13–15, 2019, Princeton, NJ, USA
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-7020-2/19/11. . . $15.00
https://doi.org/10.1145/3365609.3365852

1 INTRODUCTION
The mobile device ecosystem is large, ever growing, and very much "location-based", i.e., different devices and operating systems (Android and iOS) are popular at different locations. Advances in cloud computing have simplified the way that mobile apps are tested today. Device farms [5, 22] let developers test apps across a plethora of mobile devices, in real
time. Device diversity for testing is paramount since hard-
ware and software differences might impact how an app is
displayed or performs.
To the best of our knowledge, no existing device farm offers hardware-based battery measurements, where the power drawn by a device is measured by directly connecting its battery to an external power meter. Instead, a few startups [18, 23] offer software-based battery measurements, where device resource monitoring (screen, CPU, network, etc.) is used to infer the power consumed by the few devices for which a calibration was possible [12]. This suggests a demand for battery measurements, but a prohibitive cost for deploying hardware-based solutions.
In the research community, hardware-based battery measurements are instead quite popular [10, 11, 19, 33]. The common research approach consists of buying the required hardware (often an Android device and a Monsoon power monitor [25]), setting it up on a desk, and then using it sporadically.
This is because such battery testbeds are intrinsically local,
i.e., they require a researcher or an app tester to have physical
access to the device and the power meter.
In this paper, we challenge the assumption that a battery testbed needs to be local and propose BatteryLab [8], a distributed platform for battery measurements. Similarly to PlanetLab [29], our vision is a platform where members contribute hardware resources (e.g., some phones and a power monitor) in exchange for access to the hardware resources offered by other platform members. As new members join over time and from different locations, BatteryLab will naturally grow richer in new and old devices, as well as in devices only available at some specific locations.
BatteryLab's architecture consists of an access server, which enables an end-to-end test pipeline while supporting multiple users and concurrent timed sessions, and several vantage points, i.e., the local testbeds described above. Vantage points are enhanced with a lightweight controller hosted on a Raspberry Pi [31], which runs BatteryLab's software suite to enable remote testing, e.g., an SSH channel with the access server and device mirroring [15], which provides full remote control of test devices via the browser.
We first evaluate BatteryLab with respect to the accuracy of its battery readings. This analysis shows that BatteryLab's extra hardware has a negligible impact on the power meter's reporting. It also shows a non-negligible cost associated with device mirroring, suggesting that mirroring should only be used when devising a test, and disabled (headless mode) during actual measurements. Such headless mode is not always possible, e.g., if usability testing is the goal. In this case, the extra battery consumption associated with mirroring should be accounted for.
Finally, we demonstrate BatteryLab's usage by investigating a simple research question: which of today's Android browsers is the most energy efficient? To answer this question, we automated the testing of four popular browsers (Chrome, Firefox, Edge, and Brave) via BatteryLab. Our results show that Brave offers minimal battery consumption, while Firefox tends to consume the most. We further augment this result across multiple locations (South Africa, China, Japan, Brazil, and California) emulated via VPN tunneling.
2 RELATED WORK
This work was mainly motivated by the frustration of not
finding a tool offering easy access to battery measurements.
Several existing tools could leverage some of BatteryLab’s
ideas to match our capabilities in a paid/centralized fashion.
For example, device farms such as AWS Device Farm [5] and Microsoft AppCenter [22] could extend their offer using our hardware and software components. The same is true for startups like GreenSpector [18] and Mobile Enerlytics [23], which offer software-based battery testing.
To the best of our knowledge, MONROE [1] is the only measurement platform sharing some similarities with BatteryLab. This is a platform for experimentation in operational mobile networks in Europe. MONROE currently has a presence in 4 countries with 150 nodes, which are ad-hoc hardware configurations [24] designed for cellular measurements. BatteryLab is a measurement platform orthogonal to MONROE, since it targets real devices (Android and iOS) and fine-grained battery measurements. The latter require specific instrumentation (bulky power meters) that cannot be easily added to MONROE nodes, especially the mobile ones. In the near future, we will explore solutions like BattOr [32] to potentially enhance BatteryLab with mobility support.
Last but not least, BatteryLab offers full access to test devices via mirroring. This feature was inspired by [2], where the authors build a platform that allows an Android emulator to be accessed via the browser, with the goal of "crowdsourcing" human inputs for mobile apps. We leverage the same concept to allow remote access to BatteryLab, but further extend it to actual devices, not only emulators.
3 BATTERYLAB
This section details the design and implementation of BatteryLab, a distributed measurement platform for device battery monitoring (see Figure 1(a)). We currently focus on mobile devices only, but our architecture is flexible, and we thus plan to extend it to more devices, e.g., laptops and IoT devices.
One or multiple test devices (a phone/tablet connected to a power monitor) are hosted at universities or research organizations around the world (vantage points). BatteryLab members (experimenters) gain access to test devices via a centralized access server, where they can request time slots to deploy automated scripts and/or request remote control of a device. Once granted, remote control of the device can be shared with testers, whose task is to manually interact with a device, e.g., search for several items on a shopping application. Testers are either volunteers, recruited via email or social media, or paid, recruited via crowdsourcing websites like Mechanical Turk [4] and Figure Eight [13].
3.1 Access Server
The main role of the access server is to manage the vantage points and schedule experiments on them based on experimenter requests. We built the access server atop the Jenkins [20] continuous integration system, which is free, open source, portable (as it is written in Java), and backed by a large and active community. Jenkins enables end-to-end test pipelines while supporting multiple users and concurrent timed sessions.
BatteryLab's access server runs in the cloud (Amazon Web Services), which enables further scaling and cost optimization. Vantage points have to be added explicitly and pre-approved in multiple ways (IP lockdown, security groups). Experimenters need to authenticate and be authorized to access the web console of the access server. For increased security, this is only available over HTTPS.

The access server communicates with the vantage points via SSH. New BatteryLab members grant SSH access from the server to the vantage point's controller via public key and IP white-listing (see Section 3.4). Experimenters can access vantage points via the access server, where they can create jobs to be deployed in their favorite programming language. Only experimenters that have been granted access to the platform can create, edit, or run jobs, and every pipeline change has to be approved by an administrator. This is done via a role-based authorization matrix.
BatteryLab's Python API (see Table 1) is available to provide user-friendly device selection, interaction with the power meter, etc. The access server will then dispatch queued jobs based on experimenter constraints, e.g., target device, connectivity, or network location, and BatteryLab constraints, e.g., one job at a time per device. By default, the access server collects logs from the power meter, which are made available for several days within the job's workspace. Android logs like logcat and dumpsys can be requested via the execute_adb API, if available.
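As a concrete illustration, the job below sketches how an experimenter might drive one measurement through the Table 1 calls. This is a minimal sketch under stated assumptions: the `api` handle, the exact call signatures, and the 4.2 V target voltage are ours; only the API names come from Table 1.

```python
# Sketch of an experimenter job built on BatteryLab's Python API (Table 1).
# The `api` object, exact signatures, and the 4.2 V target voltage are
# assumptions for illustration; only the call names come from Table 1.

def run_measurement(api, device_id, duration=300):
    """Measure one device's battery drain for `duration` seconds."""
    assert device_id in api.list_devices()   # device must be ADB-visible
    api.batt_switch(device_id)               # relay: bypass the real battery
    api.power_monitor()                      # toggle the Monsoon on
    api.set_voltage(4.2)                     # typical Li-ion voltage (assumption)
    api.start_monitor(device_id, duration)   # collect current samples
    api.stop_monitor()
    api.power_monitor()                      # toggle the Monsoon back off
    api.batt_switch(device_id)               # relay: reconnect the battery
```

Such a script would be committed as a Jenkins job and queued by the access server once its constraints are satisfied.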
(a) Distributed architecture. (b) Vantage point design. (c) GUI.
Figure 1: BatteryLab's infrastructure.

We have developed several jobs which manage the vantage points. These jobs span from updating BatteryLab's wildcard certificates (see Section 3.4), to ensuring the power meter is not active when not needed (for safety reasons), or factory resetting a device. These jobs were motivated by our needs while building the system, and we expect more to come over time as the system grows.
3.2 Vantage Point
Figure 1(b) shows a graphical overview of a BatteryLab’s
vantage point with its main components: controller, power
monitor, test devices, circuit switch, and power socket.
Controller
– This is a Linux-based machine responsible for
managing the vantage point. The machine should be equipped
with both Ethernet and WiFi connectivity, a USB controller
with a series of available USB ports, as well as with an ex-
ternal General-Purpose Input/Output (GPIO) interface. We
use the popular Raspberry Pi 3B+ [
31
] running the latest
version of Raspbian Stretch (April 2019) that meets these
requirements with an affordable price.
The controller's primary role is to manage connectivity with test devices. Each device is connected to the controller's USB port, WiFi access point (configured in NAT or Bridge mode), and Bluetooth. USB connectivity is used to power each test device when not connected to the power monitor, and to instrument it via the Android Debug Bridge (ADB) [16], if available. WiFi connectivity is used to allow automation without the extra USB current, which interferes with the power monitoring procedure. (De)activation of USB ports is realized using uhubctl [27]. Bluetooth connectivity is used for automation across OSes (Android and iOS) and connectivity (WiFi and cellular). Section 3.3 discusses the automation techniques supported by BatteryLab.
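The USB (de)activation step can be sketched as a thin wrapper around uhubctl. The `-l` (hub location), `-p` (port), and `-a` (action) flags are uhubctl's documented interface; the hub location and port numbers below are assumptions for illustration.

```python
import subprocess

# Sketch of how the controller might (de)activate a USB port with uhubctl.
# The hub location "1-1" used by a caller and the port numbers are
# assumptions; -l (hub location), -p (port), -a (on/off) are uhubctl flags.

def uhubctl_cmd(hub: str, port: int, power_on: bool) -> list:
    """Build the uhubctl invocation that powers one USB port on or off."""
    return ["uhubctl", "-l", hub, "-p", str(port),
            "-a", "on" if power_on else "off"]

def set_usb_power(hub: str, port: int, power_on: bool) -> None:
    # Cutting USB power before a measurement avoids the extra USB current
    # that would otherwise pollute the power monitor's readings.
    subprocess.run(uhubctl_cmd(hub, port, power_on), check=True)
```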
The second role of the controller is to provide device mirroring, i.e., easy remote access to the device under test. We use VNC (tigervnc [34]) to enable remote access to the controller. We further use noVNC [28], an HTML VNC library and application, to provide easy access to a VNC session via a browser, with no further software required on the experimenter's or tester's side. We then mirror the test device within the noVNC/VNC session and limit access to only this visual element. In Android, this is achieved using scrcpy [15], a screen mirroring utility which runs atop ADB. No equivalent software exists for iOS, but similar functionality can be achieved by combining AirPlay Screen Mirroring [6] with (virtual) keyboard keys (see Section 3.3).
Figure 1(c) shows a snapshot of the graphical user interface (GUI) we have built around the default noVNC client. The GUI consists of an interactive area and a toolbar. The interactive area (bottom of the figure) is the area where a device screen is mirrored. As users (experimenters or testers) hover their mouse within this area, they gain access to the device currently being mirrored, and each action is executed on the physical device. The GUI connects to the controller's backend using AJAX calls to some internal RESTful APIs. The toolbar occupies the top part of the GUI, and implements a convenient subset of BatteryLab's API (see Table 1). Even though the toolbar was initially conceived as a visual helper for experimenters, it is also useful for less experienced test participants. For this reason, BatteryLab allows an experimenter to control whether or not the toolbar appears on the webpage shared with a test participant.
Power Monitor – This is power metering hardware capable of measuring the current consumed by a test device at a high sampling rate. BatteryLab currently supports the Monsoon HV [25], a power monitor with a voltage range of 0.8 V to 13.5 V and up to 6 A continuous current, sampled at 5 kHz. The Monsoon HV is controlled using its Python API [26]. Other power monitors can be supported, provided that they offer APIs to be integrated with BatteryLab's software suite.
Test Device(s) – This is a mobile device (phone or tablet) that can be connected to a power monitor. While we recommend phones with removable batteries, more complex setups requiring one to (partially) tear a device open to reach the battery are possible. Note that, on Android, device mirroring is only supported on devices running API level 21 (Android 5.0) or later.

API              Description                   Parameters
list_devices     List ADB ids of test devices  -
device_mirroring Activate device mirroring     device_id
power_monitor    Toggle Monsoon power state    -
set_voltage      Set target voltage            voltage_val
start_monitor    Start battery measurement     device_id, duration
stop_monitor     Stop battery measurement      -
batt_switch      (De)activate battery          device_id
execute_adb      Execute ADB command           device_id, command

Table 1: BatteryLab's API.
Circuit Switch – This is a relay-based circuit with multiple channels that lies between the test devices and the power monitor. The circuit switch is connected to the controller's GPIO interface, and all relays can be controlled via software from the controller. Each relay uses the device's voltage (+) terminal as an input, and programmatically switches between the battery's voltage terminal and the power monitor's Vout connector. The ground (-) connector is permanently connected to all devices' ground terminals.

This circuit switch has two main tasks. First, it allows switching between a direct connection between the phone and its battery, and the "battery bypass", which implies disconnecting the battery and connecting to the power monitor. This is required to allow the power monitor to measure the current consumed during an experiment. Second, it allows BatteryLab to concurrently support multiple test devices without having to manually move cables around.
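The software side of this switching can be sketched as a few lines of GPIO control on the controller. This is a hedged sketch: the paper only states that the relays are driven from the GPIO interface, so the BCM pin mapping and relay polarity below are our assumptions.

```python
# Hedged sketch of driving the relay circuit from the controller's GPIO.
# The BCM pin mapping and relay polarity are assumptions for illustration;
# the paper only states that relays are software-controlled via GPIO.

RELAY_PIN = {"dev0": 17, "dev1": 27}  # hypothetical BCM pin per test device

def relay_level(bypass: bool) -> int:
    """HIGH energizes the relay, routing the device's V+ terminal to the
    power monitor's Vout ("battery bypass"); LOW routes it back to the
    phone's own battery."""
    return 1 if bypass else 0

def set_bypass(device: str, bypass: bool) -> None:
    import RPi.GPIO as GPIO  # present on the Raspberry Pi controller
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN[device], GPIO.OUT)
    GPIO.output(RELAY_PIN[device], relay_level(bypass))
```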
WiFi Power Socket – This is used to allow the controller to turn the Monsoon on and off, when needed. It connects to the controller via WiFi and is controlled via a simple API. The current BatteryLab software suite only supports Meross power sockets, integrated via their APIs [14]. In the near future, we will replace this power socket by extending the capabilities of the circuit switch.
3.3 Automation
BatteryLab supports three mechanisms for test automation,
each with its own set of advantages and limitations.
Android Debug Bridge (Android) – ADB [16] is a powerful tool/protocol to control an Android device. Commands can be sent over USB, WiFi, or Bluetooth. While USB guarantees the highest reliability, it interferes with the power monitor due to the power sent to activate the USB microcontroller at the device. This is solved by sending commands over WiFi or Bluetooth. However, using WiFi implies not being able to run experiments leveraging the mobile network, and ADB-over-Bluetooth requires a rooted device. Based on an experimenter's needs, BatteryLab can dynamically switch between the above automation solutions.
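The WiFi variant relies on ADB's standard TCP mode: while the device is still on USB, its ADB daemon is restarted in TCP mode, after which commands flow over the network and USB power can be cut. A minimal sketch (the serial, IP address, and helper names are placeholders; `adb tcpip` and `adb connect` are standard ADB commands):

```python
import subprocess

# Sketch of the ADB-over-WiFi switch performed before a measurement.
# The serial and IP address are placeholders; `adb tcpip` and
# `adb connect` are standard ADB commands.

def adb(serial: str, *args: str) -> list:
    """Build an adb command targeting one specific device."""
    return ["adb", "-s", serial, *args]

def enable_adb_over_wifi(usb_serial: str, phone_ip: str, port: int = 5555) -> str:
    # While still on USB, restart the device's ADB daemon in TCP mode...
    subprocess.run(adb(usb_serial, "tcpip", str(port)), check=True)
    # ...then reconnect over WiFi, so USB power can be cut during the test.
    subprocess.run(["adb", "connect", f"{phone_ip}:{port}"], check=True)
    return f"{phone_ip}:{port}"  # new device id for subsequent adb -s calls
```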
UI Testing (Android and iOS) – This solution uses UI testing frameworks (e.g., Android's user interface tests [17] or Apple's XCTest framework [7]) to produce a separate version of the app under test, configured with automated actions. The advantage of this solution, compared with ADB, is that it does not require a communication channel with the Raspberry Pi. The main drawback is that it restricts the set of applications that can be tested, since access to an app's source code is required.
Bluetooth Keyboard (Android and iOS) – This approach automates a test device via (virtual) keyboard keys (e.g., locate an app, launch it, and interact with it). The controller emulates a typical keyboard service to which test devices connect via Bluetooth. This approach is generic and thus works for both Android and iOS devices, with no rooting needed. Since it relies on Bluetooth, it also enables experiments on the cellular network. The limitations are twofold. First, Android device mirroring is not supported, as it requires ADB. This is not an issue for automated tests, which can and should be run in headless mode to minimize noise in the battery reporting (see Figure 2). It follows that this limitation only applies to usability testing (with real users) on a mobile network.
The second limitation is that the level of automation depends on both OS and app support for keyboard commands. In Android, it can be challenging to match ADB's API with this approach. It should be noted, though, that ADB can still be used "outside" of a battery measurement when available. That is, operations needed before and after the actual battery measurement (e.g., cleaning an app's cache) can still be realized using ADB over USB. When the actual test starts, e.g., launching an app and performing some simple interactions, we can then switch to Bluetooth keyboard automation.
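The keyboard emulation itself boils down to sending standard HID keyboard reports over Bluetooth. The 8-byte boot-protocol report layout and the usage IDs below come from the USB HID usage tables (shared by Bluetooth HID); the helper names are ours, and a real implementation would also need the Bluetooth HID plumbing, which is omitted here.

```python
# Sketch of the keyboard-emulation idea: the controller sends standard
# HID keyboard reports over Bluetooth. The 8-byte boot-protocol layout
# and usage IDs are from the USB HID usage tables (shared by Bluetooth
# HID); helper names are ours, and the Bluetooth transport is omitted.

KEY_A, KEY_ENTER, MOD_NONE = 0x04, 0x28, 0x00  # HID usage IDs

def key_report(keycode: int, modifier: int = MOD_NONE) -> bytes:
    """8-byte report: [modifier, reserved, key1..key6] (one key pressed)."""
    return bytes([modifier, 0x00, keycode, 0, 0, 0, 0, 0])

RELEASE = key_report(0x00)  # the all-zero report releases every key

def press(keycode: int) -> list:
    """A key press is a report with the key set, followed by a release."""
    return [key_report(keycode), RELEASE]
```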
3.4 How to Join?
Institutions interested in joining BatteryLab can do so by following our tutorial [9]. In short, we recommend the hardware to use and describe its setup. It is important for the controller to be publicly reachable at the following configurable ports: 2222 (SSH, access server only), 8080 (GUI's backend), and 6081 (noVNC). Members provide a human-readable identifier for the vantage point, which is added to BatteryLab's DNS (e.g., node1.batterylab.dev), provided by Amazon Route53 [3]. Our wildcard letsencrypt [21] certificate is provided at this point. Renewal of this certificate is managed by the access server, which also automatically deploys it at each vantage point, when needed.

The next step consists of flashing the controller (Raspberry Pi) with BatteryLab's image. This sets up the most recent Raspbian version, along with BatteryLab's required code and configuration. A few manual steps are required to verify connectivity, grant pubkey access to the access server, and connect at least one Android device. At this point, the controller should be visible at the access server, and the device accessible at https://node1.batterylab.dev.

Figure 2: CDF of current drawn (direct, relay, direct-mirroring, relay-mirroring).
4 PRELIMINARY EVALUATION
This section preliminarily evaluates BatteryLab using its first vantage point, deployed at Imperial College London, UK. This consists of a Monsoon power meter, a Samsung J7 Duo (Android 8.0), a Raspberry Pi 3B+, and a Meross power socket.
We first evaluate BatteryLab's accuracy in reporting battery measurements. Next, we demonstrate its usage by investigating a simple research question. We further use this demonstration to benchmark BatteryLab's system performance. Finally, we study the impact of multiple device locations emulated via a VPN.
4.1 Accuracy
Compared to a classic local setup for battery measurements, BatteryLab introduces some hardware (circuit relay) and software (device mirroring) components that can impact the accuracy of the measurements collected. We devised a simple experiment where we compare three scenarios. First, a direct scenario consisting of just the Monsoon power meter, the test device, and the Raspberry Pi to instrument the power meter. For this setup, we strictly followed Monsoon's indications [25] in terms of tape, cable type and length, and connectors to be used. Next, a relay scenario, where the relay circuit is used to enable BatteryLab's programmable switching between multiple devices, as well as between battery bypass and regular battery operation (see Section 3.2). Finally, a mirroring scenario, where the device screen is mirrored to an open noVNC session. While the relay is always "required" for BatteryLab to function properly, device mirroring is only required for usability testing.

Figure 3: Per browser energy consumption (Brave, Chrome, Edge, Firefox).
Figure 2 shows the Cumulative Distribution Function (CDF) of the current consumed in each of the above scenarios during a 5-minute test. For completeness, we also consider a direct-mirroring scenario where the device is directly connected to the Monsoon and screencasting is active. During the test, we play an mp4 video pre-loaded on the device's sdcard. The rationale is to force the device mirroring mechanism to constantly update as new frames are originated. The figure shows a negligible difference between the "direct" and "relay" scenarios, regardless of whether device mirroring is active. A larger gap (median current grows from 160 to 220 mA) appears with device mirroring. This is because of the background process responsible for screencasting to the controller, which causes additional CPU usage on the device (Figure 4).
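The current figures above translate directly into discharge figures. A minimal sketch of turning a trace of Monsoon current samples (mA at 5 kHz) into average current and total discharge; the function names are ours:

```python
# Minimal sketch: turn a Monsoon trace (current samples in mA at 5 kHz)
# into average current (mA) and total discharge (mAh). Names are ours.

SAMPLING_HZ = 5000  # Monsoon HV sampling rate

def avg_current_ma(samples_ma):
    """Mean current over the trace, in mA."""
    return sum(samples_ma) / len(samples_ma)

def discharge_mah(samples_ma, rate_hz=SAMPLING_HZ):
    """Integrate the samples: mean current (mA) times duration (hours)."""
    hours = len(samples_ma) / rate_hz / 3600.0
    return avg_current_ma(samples_ma) * hours
```

For example, a constant 160 mA over one minute integrates to 160 × (60/3600) ≈ 2.67 mAh.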
4.2 Demonstration
We demonstrate BatteryLab's usage assuming an experimenter asks the following question: which of today's Android web browsers is the most energy efficient? The experimenter writes an automation script which instruments a browser to load a webpage and interact with it. Scripts are deployed via BatteryLab's Jenkins interface, and phone access is granted via device mirroring in the experimenter's browser. When satisfied with the automation, the experimenter can launch a real test with active battery monitoring. The experiment is added to Jenkins' queue and runs when the right conditions are met, i.e., no other test is running (required) and CPU utilization is low (optional). When an experiment completes, logs can be retrieved via the Jenkins interface.

Figure 4: CDF of CPU consumption (Brave and Chrome).
We build browser automation using bash and BatteryLab's ADB-over-WiFi automation procedure. We automate four popular Android browsers: Chrome, Firefox, Edge, and Brave. Our experiments are WiFi-only, since the device under test is not rooted. Each browser is instrumented to sequentially load 10 popular news websites. After a URL is entered, the automation script waits 6 seconds, emulating a typical page load time (PLT) for these websites under our (fast) network conditions, and then interacts with the page by executing multiple "scroll up" and "scroll down" operations. Before the beginning of a workload, the browser state is cleaned and the required setup is done, e.g., Chrome requires, at first launch, accepting some conditions, signing into an account or not, etc. We iterate through each browser sequentially, and re-test each browser 5 times. We repeat the full experiment with both active and inactive device mirroring.
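The per-page workload can be sketched as a short sequence of ADB commands: an intent to open the URL, followed by alternating swipes. This is a sketch, not the paper's actual script: the package names are taken from the public app listings, and the screen geometry, scroll count, and 6-second wait (left to the caller) are assumptions.

```python
# Sketch of the per-page workload: open a URL in a given browser, then
# scroll up and down. Package names come from the public app listings
# and are assumptions, as are the screen geometry and scroll count; the
# 6 s wait between load and scrolling is left to the caller.

BROWSERS = {
    "chrome": "com.android.chrome",
    "brave": "com.brave.browser",
    "firefox": "org.mozilla.firefox",
    "edge": "com.microsoft.emmx",
}

def visit_cmds(package: str, url: str, scrolls: int = 4) -> list:
    """ADB commands to load `url` in `package` and scroll up/down."""
    cmds = [["adb", "shell", "am", "start", "-a",
             "android.intent.action.VIEW", "-d", url, package]]
    for i in range(scrolls):
        y1, y2 = (1500, 500) if i % 2 == 0 else (500, 1500)  # alternate
        cmds.append(["adb", "shell", "input", "swipe",
                     "500", str(y1), "500", str(y2)])
    return cmds
```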
Browser Performance – Figure 3 shows the average battery discharge (standard deviation as error bars) measured for each browser, considering both active and inactive device mirroring. The figure shows that, regardless of device mirroring, the overall result does not change, i.e., Brave offers minimal battery consumption and Firefox consumes the most. This is because device mirroring adds a constant extra cost (about 20 mAh) regardless of the browser being tested. This result is in line with the constant gap observed between active and inactive device mirroring in Figure 2.
This additional battery consumption caused by device mirroring is due to an increase in CPU load on the device under test. Figure 4 shows the CDF of the CPU utilization for Chrome and Brave, with active and inactive device mirroring. A similar trend is observed for the other browsers, which have been omitted for plot visibility. The figure shows two results. First, Brave's lower battery consumption comes from an overall lower CPU pressure, e.g., a median CPU utilization of 12% versus 20% in Chrome. Second, device mirroring causes, for both browsers, a 5% CPU increase. This is more noticeable at higher CPU values, which is when the browser automation is active. This happens because of the increased load on the encoder when the screen content changes quickly versus, for example, the phone's static home screen.

Figure 5: CDF of CPU consumption at the controller (Raspberry Pi 3B+).
System Performance – Overall, higher CPU utilization is the main extra cost caused by device mirroring (an extra 50%, on average). The impact on memory consumption is minimal (an extra 6%, on average). Overall, memory does not appear to be an issue, given less than 20% utilization of the Raspberry Pi's 1 GB. The networking demand is also minimal, with just 32 MB of upload traffic for a 7-minute test. Note that we set scrcpy's video encoding (H.264) rate to 1 Mbps, which produces an upper bound of about 50 MB. The lower value depends on the extra compression provided by noVNC.
Evaluating the responsiveness of device mirroring is challenging. We call latency the time between when an action is requested, either via automation or a click in the browser, and when the consequence of this action is displayed back in the browser, after being executed on the device. This depends on many factors, like network latency (between browser and test device), load on the device and/or controller, and software optimizations. We estimate such latency by recording audio (44,100 Hz) and video (60 fps) while interacting with the device via the browser. We then manually annotated the video using the ELAN multimedia annotation software [35] and computed the latency as the time between a mouse click (identified via sound) and the first frame with a visual change in the app. We repeat this test 40 times while co-located with the vantage point (1 ms network latency) and measure an average latency of 1.44 (±0.12) sec.

Location      Speedtest server (km)   D (Mbps)  U (Mbps)  L (ms)
South Africa  Johannesburg (3.21)     6.26      9.77      222.04
China         Hong Kong (4.86)        7.64      7.77      286.32
Japan         Bunkyo (2.21)           9.68      7.76      239.38
Brazil        Sao Paulo (8.84)        9.75      8.82      235.05
CA, USA       Santa Clara (7.99)      10.63     14.87     215.16

Table 2: ProtonVPN statistics. D=download, U=upload, L=RTT.
Next, we dig deeper into CPU utilization at the controller. Figure 5 shows the CDF of the controller's CPU utilization during the Chrome experiments with active and inactive device mirroring; no significant difference was observed for the other browsers. When device mirroring is inactive, the controller is mostly underloaded, i.e., constant CPU utilization at 25%. This load is caused by the communication with the Monsoon to pull battery readings at the highest frequency. When device mirroring is enabled, the median load instead increases to about 75%. Further, in 10% of the measurements the load is quite high, over 95%.
4.3 Location, Location, Location
BatteryLab's distributed nature is both a feature and a necessity. It is a feature since it allows battery measurements under diverse network conditions, which is, to the best of our knowledge, an uncharted research area. It is a necessity since it is how the platform can scale without incurring high costs. We here explore the impact of network location on battery measurements. Lacking multiple vantage points, we emulate such network footprint via a VPN.
We acquired a basic subscription to ProtonVPN [30] and set it up at the controller. We then chose 5 locations to tunnel our tests through. Table 2 summarizes the locations, along with network measurements from SpeedTest (upload and download bandwidth, latency). VPN vantage points are sorted by download bandwidth, with the South Africa node being the slowest and the California node the fastest. Since the speedtest server is always within 10 km of each VPN node, the latency reported here is mostly representative of the network path between the vantage point and the VPN node.
Next, we extend the automation script to also activate a
VPN connection at the controller before testing.

Figure 6: Brave and Chrome energy consumption mea-
sured through VPN tunnels.

Figure 6 shows the average battery discharge (standard
deviation as error bars) per VPN location and browser; for
visibility and to bound the experiment duration, only Chrome
and Brave were tested. Overall, the figure does not show
dramatic differences among the battery measurements as a
function of the network location. For example, while the
available bandwidth almost doubles between South Africa
and California, the average discharge variation stays within
standard deviation bounds. This is encouraging for experiments
where BatteryLab’s distributed nature is a necessity and its
noise should be minimized.
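The "within standard deviation bounds" criterion can be made explicit: two locations are indistinguishable when the gap between their mean discharges does not exceed their error bars. A minimal sketch, with made-up discharge numbers rather than Figure 6's actual values:

```python
# Hypothetical check that per-location mean discharges overlap within
# one standard deviation; the values below are illustrative only and
# are not taken from Figure 6.

def within_std(mean_a, std_a, mean_b, std_b):
    """True if the difference of means lies within the larger std."""
    return abs(mean_a - mean_b) <= max(std_a, std_b)

locations = {  # location -> (mean discharge mAh, std mAh), illustrative
    "South Africa": (5.1, 0.4),
    "California":   (4.8, 0.5),
}
(m_a, s_a), (m_b, s_b) = locations.values()
print(within_std(m_a, s_a, m_b, s_b))  # True: |5.1 - 4.8| <= 0.5
```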
Figure 6 also shows an interesting trend when comparing
Brave and Chrome via the Japanese VPN node. There,
Brave’s energy consumption is in line with the other nodes,
while Chrome’s is at its minimum. This is caused by a
significant (20%) drop in Chrome’s bandwidth usage, the
result of a systematic reduction in the overall size of the ads
shown at this location. This is an interesting result for
experiments where BatteryLab’s distributed nature is a feature.
5 CONCLUSION AND FUTURE WORK
In this paper we have proposed BatteryLab, a distributed
platform for battery measurements. We have also started
building and experimenting with BatteryLab, to the point
that our system is ready to accept new members. We
specifically focused on Android because of its ease of
integration and the availability of testing tools; however, we
discussed iOS solutions that we plan to experiment with soon.
Similarly, while we focus on mobile devices, there is no
fundamental constraint preventing BatteryLab from supporting
laptops or IoT devices. We designed BatteryLab to enable
remote access and human-controlled tests; we plan to
facilitate such tests via integration with platforms like
Mechanical Turk [4] and Figure Eight [13]. Our vision is an
open source and open access platform that users can join by
sharing resources. However, we also anticipate granting
access via a credit system to experimenters lacking the
resources for the initial setup.
ACKNOWLEDGMENTS
Katevas and Haddadi were partially supported by the EPSRC
Databox and DADA grants (EP/N028260/1, EP/R03351X/1).
HotNets ’19, November 13–15, 2019, Princeton, NJ, USA Varvello et al.
REFERENCES
[1] Ö. Alay, A. Lutu, M. Peón-Quirós, V. Mancuso, T. Hirsch, K. Evensen, A. Hansen, S. Alfredsson, J. Karlsson, A. Brunstrom, et al. Experience: An open platform for experimentation with commercial mobile broadband networks. In Proc. ACM MobiCom, pages 70–78, 2017.
[2] M. Almeida, M. Bilal, A. Finamore, I. Leontiadis, Y. Grunenberger, M. Varvello, and J. Blackburn. CHIMP: Crowdsourcing human inputs for mobile phones. In Proc. of WWW, pages 45–54, 2018.
[3] Amazon Inc. A reliable and cost-effective way to route end users to Internet applications. https://aws.amazon.com/route53/.
[4] Amazon Inc. Amazon Mechanical Turk. https://www.mturk.com/.
[5] Amazon Inc. AWS Device Farm. https://aws.amazon.com/device-farm/.
[6] Apple Inc. How to AirPlay video and mirror your device's screen. https://support.apple.com/HT204289.
[7] Apple Inc. XCTest - Apple Developer Documentation. https://developer.apple.com/documentation/xctest.
[8] BatteryLab. A distributed platform for battery measurements. https://batterylab.dev.
[9] BatteryLab. BatteryLab tutorial for new members. https://batterylab.dev/tutorial/blab-tutorial.pdf.
[10] D. H. Bui, Y. Liu, H. Kim, I. Shin, and F. Zhao. Rethinking energy-performance trade-off in mobile web page loading. In Proc. ACM MobiCom, 2015.
[11] Y. Cao, J. Nejati, M. Wajahat, A. Balasubramanian, and A. Gandhi. Deconstructing the energy consumption of the mobile page load. Proc. of the ACM on Measurement and Analysis of Computing Systems, 1(1):6:1–6:25, June 2017.
[12] X. Chen, N. Ding, A. Jindal, Y. C. Hu, M. Gupta, and R. Vannithamby. Smartphone energy drain in the wild: Analysis and implications. In Proc. ACM SIGMETRICS, 2015.
[13] Figure Eight. The essential data annotation platform. https://www.figure-eight.com.
[14] A. Geniola. Simple Python library for Meross devices. https://github.com/albertogeniola/MerossIot.
[15] Genymobile. scrcpy: Display and control your Android device. https://github.com/Genymobile/scrcpy.
[16] Google Inc. Android Debug Bridge. https://developer.android.com/studio/command-line/adb.
[17] Google Inc. Android Developers - Automate user interface tests. https://developer.android.com/training/testing/ui-testing.
[18] Greenspector. Test in the cloud with real mobile devices. https://greenspector.com/en/.
[19] C. Hwang, S. Pushp, C. Koh, J. Yoon, Y. Liu, S. Choi, and J. Song. Raven: Perception-aware optimization of power consumption for mobile games. In Proc. ACM MobiCom, 2017.
[20] Jenkins. The leading open source automation server. https://jenkins.io/.
[21] Let's Encrypt. A free, automated, and open Certificate Authority. https://letsencrypt.org.
[22] Microsoft, Visual Studio. App Center is mission control for apps. https://appcenter.ms/sign-in.
[23] Mobile Enerlytics. The leader in automated app testing innovations to reduce battery drain. http://mobileenerlytics.com/.
[24] MONROE - H2020-ICT-11-2014. Measuring Mobile Broadband Networks in Europe. https://www.monroe-project.eu/wp-content/uploads/2017/12/Deliverable-D2.2-Node-Deployment.pdf.
[25] Monsoon Solutions Inc. High voltage power monitor. https://www.msoon.com.
[26] Monsoon Solutions Inc. Monsoon Power Monitor Python Library. https://github.com/msoon/PyMonsoon.
[27] mvp. uhubctl - USB hub per-port power control. https://github.com/mvp/uhubctl.
[28] noVNC. A VNC client JavaScript library as well as an application built on top of that library. https://novnc.com.
[29] PlanetLab. An open platform for developing, deploying, and accessing planetary-scale services. https://www.planet-lab.org/.
[30] ProtonVPN. High-speed Swiss VPN that safeguards your privacy. https://protonvpn.com/.
[31] Raspberry Pi. Raspberry Pi 3 Model B+. https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/.
[32] A. Schulman, T. Schmid, P. Dutta, and N. Spring. Phone power monitoring with BattOr. In Proc. ACM MobiCom, 2011.
[33] N. Thiagarajan, G. Aggarwal, A. Nicoara, D. Boneh, and J. P. Singh. Who killed my battery? Analyzing mobile browser energy consumption. In Proc. of WWW, 2012.
[34] TigerVNC. A high-performance, platform-neutral implementation of VNC (Virtual Network Computing). https://tigervnc.org.
[35] P. Wittenburg, H. Brugman, A. Russel, A. Klassmann, and H. Sloetjes. ELAN: A professional framework for multimodality research. In Proc. of LREC, 2006.