Distributed Computing Seminar
Lecture 1: Introduction to Distributed Computing & Systems Background
Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet
Summer 2007

Except where otherwise noted, the contents of this presentation are Copyright 2007 University of Washington and are licensed under the Creative Commons Attribution 2.5 License.

Course Overview
- 5 lectures:
  - 1 introduction
  - 2 on the technical side: MapReduce & GFS
  - 2 on theory: algorithms for distributed computing
- Readings + questions nightly

Outline
- Introduction to distributed computing
- Parallel vs. distributed computing
- History of distributed computing
- Parallelization and synchronization
- Networking basics

Computer Speedup
- Moore's Law: "The density of transistors on a chip doubles every 18 months, for the same cost" (1965)
- (Image: Tom's Hardware; not subject to the Creative Commons license applicable to the rest of this work.)

Scope of Problems
- What can you do with 1 computer?
- What can you do with 100 computers?
- What can you do with an entire data center?

Distributed Problems
- Rendering multiple frames of high-quality animation (Image: DreamWorks Animation; not subject to the Creative Commons license applicable to the rest of this work.)
- Simulating several hundred or thousand characters (Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema; neither image is subject to the Creative Commons license applicable to the rest of this work.)
- Indexing the web (Google)
- Simulating an Internet-sized network for networking experiments (PlanetLab)
- Speeding up content delivery (Akamai)
- What is the key attribute that all these examples have in common?

Parallel vs. Distributed
- Parallel computing can mean:
  - Vector processing of data
  - Multiple CPUs in a single computer
- Distributed computing is multiple CPUs across many computers over the network

A Brief History: 1975-85
- Parallel computing was favored in the early years
- Primarily vector-based at first
- Gradually more thread-based parallelism was introduced
- (Image: Computer Pictures Database and Cray Research Corp; not subject to the Creative Commons license applicable to the rest of this work.)

A Brief History: 1985-95
- "Massively parallel architectures" start rising in prominence
- Message Passing Interface (MPI) and other libraries developed
- Bandwidth was a big problem

A Brief History: 1995-Today
- Cluster/grid architecture increasingly dominant
- Special node machines eschewed in favor of COTS technologies
- Web-wide cluster software
- Companies like Google take this to the extreme

Parallelization & Synchronization

Parallelization Idea
- Parallelization is "easy" if processing can be cleanly split into n units

Parallelization Idea (2)
- In a parallel computation, we would like to have as many threads as we have processors; e.g., a four-processor computer would be able to run four threads at the same time (see the code sketch below)

Parallelization Idea (3) / (4)
- (Diagram slides: distributing work units to threads and collecting results.)
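The following is a minimal Java sketch of the splitting idea above, not the slides' own code: the input is divided into n disjoint chunks, one per processor, and the per-chunk results are summed at the end. The names SplitWork and processChunk, and the sum-of-squares computation, are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitWork {

    // Hypothetical per-chunk computation standing in for one "work unit".
    static long processChunk(int[] data, int from, int to) {
        long sum = 0;
        for (int i = from; i < to; i++) sum += (long) data[i] * data[i];
        return sum;
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        // One thread per processor, as the slide suggests.
        int n = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(n);

        // Split the input into n contiguous, non-overlapping chunks;
        // no synchronization is needed because the chunks are disjoint.
        int chunkSize = (data.length + n - 1) / n;
        List<Future<Long>> parts = new ArrayList<>();
        for (int t = 0; t < n; t++) {
            final int from = t * chunkSize;
            final int to = Math.min(from + chunkSize, data.length);
            parts.add(pool.submit(() -> processChunk(data, from, to)));
        }

        // Aggregate the per-chunk results.
        long total = 0;
        for (Future<Long> f : parts) total += f.get();
        pool.shutdown();
        System.out.println("total = " + total);
    }
}
```

The cleanliness of this split is exactly what the next slide questions: real workloads rarely decompose into chunks this independent.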
Parallelization Pitfalls
- But this model is too simple!
- How do we assign work units to worker threads?
- What if we have more work units than threads?
- How do we aggregate the results at the end?
- How do we know all the workers have finished?
- What if the work cannot be divided into completely separate tasks?
- What is the common theme of all of these problems?

Parallelization Pitfalls (2)
- Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource.
- Golden rule: any memory that can be used by multiple threads must have an associated synchronization system!

What Is Wrong With This?

```java
// Thread 1:
void foo() {
    x++;
    y = x;
}

// Thread 2:
void bar() {
    y++;
    x += 3;
}
```

- If the initial state is y = 0, x = 6, what happens after these threads finish running?

Multithreaded = Unpredictability
- When we run a multithreaded program, we don't know what order the threads run in, nor do we know when they will interrupt one another.
- Many things that look like "one step" operations actually take several steps under the hood (pseudo-assembly):

```
// Thread 1:
void foo() {
    eax = mem[x];
    inc eax;
    mem[x] = eax;
    ebx = mem[x];
    mem[y] = ebx;
}

// Thread 2:
void bar() {
    eax = mem[y];
    inc eax;
    mem[y] = eax;
    eax = mem[x];
    add eax, 3;
    mem[x] = eax;
}
```

Multithreaded = Unpredictability (2)
- This applies to more than just integers:
  - Pulling work units from a queue
  - Reporting work back to the master unit
  - Telling another thread that it can begin the "next phase" of processing
- All require synchronization!

Synchronization Primitives
- A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically.
- Hardware support guarantees that operations on synchronization primitives only ever take one step.

Semaphores
- A semaphore is a flag that can be raised or lowered in one step.
- Semaphores were flags that railroad engineers would use when entering a shared track.
- Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores (2)
- set() and reset() can be thought of as lock() and unlock().
- Calls to lock() when the semaphore is already locked cause the thread to block.
- Pitfalls: must "bind" semaphores to particular objects; must remember to unlock correctly.

The "Corrected" Example

```java
// Global: Semaphore sem = new Semaphore();  guards access to x & y.

// Thread 1:
void foo() {
    sem.lock();
    x++;
    y = x;
    sem.unlock();
}

// Thread 2:
void bar() {
    sem.lock();
    y++;
    x += 3;
    sem.unlock();
}
```

Condition Variables
- A condition variable notifies threads that a particular condition has been met.
- Example: inform another thread that a queue now contains elements to pull from (or that it's empty: request more elements!).
- Pitfall: what if nobody's listening?

The Final Example

```java
// Global vars:
// Semaphore sem = new Semaphore();
// ConditionVar fooFinishedCV = new ConditionVar();
// boolean fooDone = false;

// Thread 1:
void foo() {
    sem.lock();
    x++;
    y = x;
    fooDone = true;
    sem.unlock();
    fooFinishedCV.notify();
}

// Thread 2:
void bar() {
    sem.lock();
    if (!fooDone) fooFinishedCV.wait(sem);
    y++;
    x += 3;
    sem.unlock();
}
```
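The slide code above uses a hypothetical Semaphore/ConditionVar API. As a runnable counterpart, here is a minimal sketch of the same handoff in standard Java, substituting java.util.concurrent.locks.ReentrantLock and Condition for the slides' primitives; the class name FooBarHandoff is an invented label.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class FooBarHandoff {
    static int x = 6, y = 0;               // initial state from the slide
    static boolean fooDone = false;
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition fooFinished = lock.newCondition();

    static void foo() {
        lock.lock();
        try {
            x++;
            y = x;
            fooDone = true;
            fooFinished.signal();          // wake bar() if it is waiting
        } finally {
            lock.unlock();                 // unlock in finally: the slides' "remember to unlock" pitfall
        }
    }

    static void bar() throws InterruptedException {
        lock.lock();
        try {
            while (!fooDone) fooFinished.await();  // loop guards against spurious wakeups
            y++;
            x += 3;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(FooBarHandoff::foo);
        Thread t2 = new Thread(() -> {
            try { bar(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        t2.start(); t1.start();
        t1.join(); t2.join();
        System.out.println("x = " + x + ", y = " + y);  // always x = 10, y = 8
    }
}
```

Because bar() now waits for foo() to finish, the outcome is deterministic regardless of which thread the scheduler runs first, which is exactly what the unsynchronized version could not guarantee.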
Too Much Synchronization? Deadlock
- Synchronization becomes even more complicated when multiple locks can be used.
- Can cause the entire system to "get stuck"!

```java
// Thread A:
semaphore1.lock();
semaphore2.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();

// Thread B:
semaphore2.lock();
semaphore1.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();
```

- (Image: RPI CSCI.4210 Operating Systems notes)

The Moral: Be Careful!
- Synchronization is hard:
  - Need to consider all possible shared state
  - Must keep locks organized and use them consistently and correctly
- Knowing there are bugs may be tricky; fixing them can be even worse!
- Keeping shared state to a minimum reduces total system complexity.

Fundamentals of Networking

Sockets: The Internet = Tubes?
- A socket is the basic network interface.
- Provides a two-way "pipe" abstraction between two applications.
- The client creates a socket and connects to the server, which receives a socket representing the other side. (A minimal socket sketch appears after the Conclusions slide below.)

Ports
- Within an IP address, a port is a sub-address identifying a listening program.
- Allows multiple clients to connect to a server at once.

What Makes This Work?
- Underneath the socket layer are several more protocols.
- Most important are TCP and IP (which are used hand-in-hand so often that they're often spoken of as one protocol: TCP/IP).
- Even lower-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless.

Why Is This Necessary?
- The Internet is not actually tube-like "under the hood".
- Unlike the phone system (circuit switched), the packet-switched Internet uses many routes at once.

Networking Issues
- If a party to a socket disconnects, how much data did they receive? Did they crash? Or did a machine in the middle?
- Can someone in the middle intercept/modify our data?
- Traffic congestion makes switch/router topology important for efficient throughput.

Conclusions
- Processing more data means using more machines at the same time.
- Cooperation between processes requires synchronization.
- Designing real distributed systems requires consideration of networking topology.
- Next time: how MapReduce works.
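As referenced in the Sockets slide above, here is a minimal sketch of the socket-and-port model in standard Java (java.net): a server binds to a port and listens, a client connects, and the two ends exchange a line of text over the two-way pipe. The port number 9000, the echo behavior, and the class name SocketSketch are illustrative assumptions, not from the slides.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketSketch {
    public static void main(String[] args) throws Exception {
        int port = 9000;  // a port is a sub-address identifying this listening program

        // Server side: bind to the port first so the client cannot connect too early.
        ServerSocket listener = new ServerSocket(port);
        Thread server = new Thread(() -> {
            // accept() blocks until a client connects, then yields a socket
            // representing the server's end of the two-way pipe.
            try (Socket conn = listener.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(conn.getInputStream()));
                 PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());  // read a line, write one back
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        server.start();

        // Client side: create a socket and connect to the server's address + port.
        try (Socket sock = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(sock.getInputStream()))) {
            out.println("hello over TCP");
            System.out.println(in.readLine());  // prints "echo: hello over TCP"
        }

        server.join();
        listener.close();
    }
}
```

Everything below the Socket/ServerSocket calls, from TCP segments down to Ethernet or 802.11 frames, is handled by the protocol layers the slides describe; the application only sees the pipe.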