Abstract

With advancements in cloud technologies, serverless applications are growing in popularity. Serverless tools let developers focus on code without the heavy lifting of server management, load balancing, and scaling applications to meet growing traffic. Intel CPUs have dominated the cloud market for years, but AMD is making waves with new chips powered by its Zen architecture, creating heterogeneity among cloud servers. This heterogeneity makes it difficult for developers to tune running code for specific hardware. Cloud providers are generally neutral to workload types and have no mechanism to automatically choose hardware that ensures maximum performance. This is a problem because running the same code on different machines can produce unpredictable performance differences and unreliable changes in incurred costs. In my thesis, I measure the performance of nine serverless functions on two CPU chips, one by Intel and one by AMD, where each application stresses one or more aspects of the system. I emulate a serverless environment (using Apache OpenWhisk) on these CPUs to run the applications and use Linux tools such as "perf" to measure their performance. I found that an application's performance is governed by two major factors: (1) the type of optimization flags used and (2) the number of cores utilized by the running code. I confirm these findings by running industry-standard benchmarks under four different configuration options. Identifying performance differences across these platforms can help cloud providers improve their service offerings by provisioning servers efficiently according to their applications' workload types.