An Analysis of Linux Scalability to Many Cores
Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich (MIT CSAIL)
Presented by Chandra Sekhar Sarma Akella and Rutvij Talavdekar
Introduction
This paper asks whether traditional kernel designs can be used and implemented in a way that allows applications to scale.
The authors analyze the scaling of a number of applications (MOSBENCH) on Linux running on a 48-core machine:
Exim, memcached, Apache, PostgreSQL, gmake, the Psearchy file indexer, and the Metis MapReduce library
What is Scalability?
Ideally, an application does N times as much work on N cores as it does on 1 core. In practice that is not the case, because of the serial parts of the code.
Scalability is better understood through Amdahl's Law.
Amdahl's Law
Amdahl's Law quantifies the performance gain from adding cores to an application that has both serial and parallel components. The serial portion of an application has a disproportionate effect on the speedup gained by adding cores.
As N approaches infinity, the speedup approaches 1 / S, where S is the serial fraction (e.g., with S = 10% the limit is 10). If 25% of a program is serial, adding any number of cores cannot provide a speedup of more than 4.
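For reference, the standard form of the law, with serial fraction S and N cores:

    Speedup(N) = 1 / (S + (1 - S) / N)

As N grows, the speedup tends to 1 / S; with S = 0.25 this limit is 1 / 0.25 = 4, matching the example above.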
Description of Services
Exim (mail server): forks heavily, once per incoming SMTP connection and once per message delivery
memcached (distributed memory caching system): stresses the network stack
Apache (web server): a process per instance; stresses the network stack and the file system
PostgreSQL (open-source SQL database): uses shared data structures and kernel locking interfaces, and communicates over TCP sockets
Many applications spend a considerable amount of their CPU time in the kernel. These applications should scale with more cores, but if the OS kernel does not scale, the applications will not scale either.
Points to consider:
How serious are the scaling problems? Do they have alternatives? How hard is it to fix them?
Contribution
Analysis of Linux scalability for 7 real, system-intensive applications: stock Linux limits their scalability. Analysis of the bottlenecks.
Fixes: 3002 lines of code in 16 patches. Most fixes improve the scalability of multiple applications. The fixes were made in the kernel, with minor fixes in the applications and some changes to how the applications use kernel services. The remaining bottlenecks were either in shared hardware resources or in the applications.
Result: with the fixes applied, the patched kernel shows no kernel scalability problems up to 48 cores. Except for sloppy counters, most fixes are applications of standard parallel programming techniques.
MOSBENCH
Application: time spent in kernel; bottleneck
Exim (mail server): 69%; process creation, small file creation and deletion
memcached: 80%; packet processing in the network stack
Apache (web server): 60%; network stack, file system (directory name lookup)
PostgreSQL (database): up to 82%; kernel locking interfaces, network interfaces, application's internal shared data structures
gmake (parallel build): up to 7.6%; CPU intensive, file system reads/writes
Psearchy (file indexer): up to 23%; file system reads/writes to multiple files
Metis (mapreduce library): up to 16%
48 Core Server
Comprises 8 AMD Opteron chips with 6 cores on each chip.
Each core has a private 64 KB L1 cache (3-cycle access) and a private 512 KB L2 cache (14 cycles); the 6 cores on each chip share a 6 MB L3 cache (28 cycles).
struct vfsmount *lookup_mnt(struct path *path)
{
        struct vfsmount *mnt;

        spin_lock(&vfsmount_lock);   /* global lock protecting the mount hash table */
        mnt = hash_get(mnts, path);  /* the critical section is a short hash lookup */
        spin_unlock(&vfsmount_lock);
        return mnt;
}
On a multi-core system, spin_lock and spin_unlock use many more cycles (~400-5000) than the critical section itself (tens of cycles).
With more cores, the cores spend more time on lock contention and congest the interconnect with lock-acquisition requests and cache-line invalidations.
The fix: in the common case, cores consult per-core tables for mount-point metadata; modifying the mount table invalidates the per-core tables. A sketch of this idea follows.
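A minimal sketch of per-core mount caching, using hypothetical names (percore_cache, lookup_mnt_percore, NCORES) and the simplified locking of the snippet above; this illustrates the technique, not the actual kernel patch:

/* Per-core cache of the last mount-point lookup (hypothetical sketch). */
#define NCORES 48

struct mnt_entry {
        struct path     *path;
        struct vfsmount *mnt;
        int              valid;
};

static struct mnt_entry percore_cache[NCORES];

struct vfsmount *lookup_mnt_percore(int core, struct path *path)
{
        struct mnt_entry *e = &percore_cache[core];

        if (e->valid && e->path == path)
                return e->mnt;              /* common case: no shared lock, no shared cache line */

        spin_lock(&vfsmount_lock);          /* miss: fall back to the global mount table */
        e->mnt = hash_get(mnts, path);
        e->path = path;
        e->valid = 1;
        spin_unlock(&vfsmount_lock);
        return e->mnt;
}

/* Writers (mount/umount) are rare; they invalidate every per-core entry. */
void invalidate_percore_caches(void)
{
        int c;
        for (c = 0; c < NCORES; c++)
                percore_cache[c].valid = 0;
}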
Reading the reference count is slow, and these counters become bottlenecks when many cores update them. Accessing the shared reference count delays memory operations from other cores; a central reference count means waiting, lock contention, and cache-coherency serialization.
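The paper's fix is sloppy counters: each core keeps a few spare references in a local counter and only touches the shared counter when its local supply runs out. Below is a minimal, hypothetical C11 sketch of the idea (names such as sloppy_get/sloppy_put and the BATCH size are invented for illustration; this is not the kernel implementation):

#include <stdatomic.h>

#define NCORES 48
#define BATCH  16   /* references a core takes from the central counter at a time */

struct sloppy_counter {
        _Atomic long central;       /* references handed out to all cores */
        long local[NCORES];         /* spare (unused) references held by each core */
};

/* Take a reference on this core; shared memory is touched only when spares run out. */
void sloppy_get(struct sloppy_counter *c, int core)
{
        if (c->local[core] == 0) {
                atomic_fetch_add(&c->central, BATCH);   /* rare slow path */
                c->local[core] = BATCH;
        }
        c->local[core]--;                               /* common case: per-core only */
}

/* Drop a reference; spares accumulate locally and are returned lazily. */
void sloppy_put(struct sloppy_counter *c, int core)
{
        c->local[core]++;
        if (c->local[core] >= 2 * BATCH) {
                atomic_fetch_sub(&c->central, BATCH);
                c->local[core] -= BATCH;
        }
}

/* The true count (needed only rarely, e.g. before freeing an object) sums all cores. */
long sloppy_read(struct sloppy_counter *c)
{
        long sum = atomic_load(&c->central);
        int i;
        for (i = 0; i < NCORES; i++)
                sum -= c->local[i];
        return sum;
}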
Scalability Issues
Five scalability issues cause most of the bottlenecks:
1) A global lock used for a shared data structure: more cores, longer lock wait times
2) A shared memory location: more cores, more overhead from the cache-coherence protocol
3) Tasks competing for a limited-size shared hardware cache: more cores, higher cache miss rates
4) Tasks competing for shared hardware resources (interconnects, DRAM interfaces): more cores, more time wasted waiting
5) Too few available tasks: more cores, lower efficiency
These issues can often be avoided (or limited) using popular parallel programming techniques.
Lock-free algorithms
Lock-free algorithms ensure that threads competing for a shared resource do not have their execution indefinitely postponed by mutual exclusion.
They avoid lock contention and cache-coherency serialization and cut lock-acquisition wait times. Combined with per-core data structures, cores query their own per-core structure rather than a central one, avoiding contention and serialization on the shared data structure. A generic example follows.
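As a generic illustration of the lock-free technique (not code from the paper or the kernel), a push onto a shared stack using C11 compare-and-swap; the node and stack names are invented for the example:

#include <stdatomic.h>
#include <stdlib.h>

struct node {
        int value;
        struct node *next;
};

static _Atomic(struct node *) top = NULL;

/* Lock-free push: no thread ever blocks on a lock held by another thread. */
void push(int value)
{
        struct node *n = malloc(sizeof(*n));
        struct node *old;

        n->value = value;
        old = atomic_load(&top);
        do {
                n->next = old;
                /* Retry if another core changed 'top' since we read it; the CAS
                 * updates 'old' with the current value on failure. */
        } while (!atomic_compare_exchange_weak(&top, &old, n));
}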
Summary of Changes
3002 lines of changes to the kernel and 60 lines of changes to the applications. Per-core data structures and sloppy counters provide across-the-board improvements to 3 of the 7 applications.
Limitations
Results are limited to 48 cores and a small set of applications; they may differ with a different number of cores or a different set of applications.
Concurrent modifications to the address space are not stressed by the workloads.
An in-memory temporary file system (tmpfs) was used instead of disk I/O.
The 48-core AMD machine (8 chips) may behave differently from a future single 48-core chip.
Current bottlenecks
With the fixes applied, kernel code is no longer the bottleneck; the remaining bottlenecks lie in the applications or in shared hardware. Further kernel changes might still help future applications or hardware.
Conclusion
Stock Linux has scalability problems, but they are easy to fix or avoid up to 48 cores. The bottlenecks can be fixed to improve scalability, and the Linux community can provide better support in this regard. In the context of 48 cores, there is no need to rethink operating systems and explore new kernel designs.
References
Original paper: pdos.csail.mit.edu/papers/linux:osdi10.pdf
Original presentation: https://www.usenix.org/events/osdi10/tech/slides/boyd-wickizer.pdf
VFSMount: http://lxr.free-electrons.com/ident?i=vfsmount
MOSBENCH: pdos.csail.mit.edu/mosbench/
ACM Digital Library: http://dl.acm.org/citation.cfm?id=1924944
InformationWeek: http://www.informationweek.com
Wikipedia: tmpfs, sloppy counters
University of Illinois: sloppy counters
University College London: per-core data structures
THANK YOU !