
Does Little's Law really apply to performance models now?

www.myexceptions.net  Shared by a user on: 2015-02-08  Views: 0

Last time I talked about Little's Law, and I explained that it has an inner relationship with another formula. But in fact I did not myself have a complete picture of how LL is actually applied in today's complex system architectures, so I threw the question out on LinkedIn. Unexpectedly, I received replies from many master-level practitioners, some of them very detailed:

My question:

Does Little's Law really apply to performance models now?


I'm confused about the meaning of the parameters in the formula N = X * R, where N represents the number of concurrent users. But in almost all the papers I have read, X does not denote the arrival rate of people; it denotes the transaction rate. Does that mean each user has only one transaction? And does that mean the law is only applicable to small projects or academic study?

 

All replies so far:

  • Alexander Podelko

    Consulting Member of Technical Staff, Performance Engineering at Oracle

    People arrivals don't impact the system; what they do does. For example, if you load a static home page and then just sit there reading for a couple of hours, the only request your system processed is that one home-page request. So it is the rate of real requests that matters, not the number of online users (at least from the point of view of Little's law; online users may hold some resources, but that is a separate topic).

  • Neil Gunther

    Founder/Computer Scientist, Performance Dynamics

    The unstated assumption that you are missing is that the system is in steady state. In other words, the number of arrivals (A) and the number of completions (C) are the same value (on average).

    Since all computer systems are stochastic, steady state is true in the long run, i.e., during a long enough measurement period (T). Then, A = C and the arrival rate (λ = A/T) on the input side of the system will be equivalent to the completion rate (X = C/T) on the output side, i.e., λ = X. Performance engineers have a habit of calling the completion rate the "throughput," for some reason.

    Thus, you can write LL the way you did, viz., N = X * R or equivalently, N = λ * R. LL says that although the number going in must equal the number coming out (in steady state), there can be another number, N, in the system that spends some time, R (the residence time), doing something in the system before departing. This is how the checkout at a grocery store works: people can be arriving and departing, but there are also people waiting, as well as people having their groceries rung up. LL tells you how many people will be in the queue if you know either their arrival rate or their departure rate, and their residence time (on average).

    BTW, there are actually 3 ways of writing LL, 
    http://perfdynamics.blogspot.com/2014/07/a-little-triplet.html
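
Neil's steady-state bookkeeping can be checked numerically. Below is a minimal single-server queue simulation (my own illustration in Python, not from the thread; the arrival and service rates are made up) showing that the measured throughput X times the measured mean residence time R reproduces the time-averaged number of jobs in the system:

```python
import random

def simulate_queue(lam, mu, n_jobs=50_000, seed=1):
    """Single-server FIFO queue with Poisson arrivals (rate lam) and
    exponential service (rate mu). Returns the completion rate X, the
    mean residence time R, and N = X * R, which by Little's law is the
    time-averaged number of jobs in the system."""
    rng = random.Random(seed)
    t_arrive = 0.0          # arrival time of the current job
    server_free = 0.0       # time at which the server next becomes idle
    total_residence = 0.0   # sum of per-job times in system
    for _ in range(n_jobs):
        t_arrive += rng.expovariate(lam)            # next arrival
        start = max(t_arrive, server_free)          # queue if server busy
        server_free = start + rng.expovariate(mu)   # departure time
        total_residence += server_free - t_arrive   # this job's R_i
    T = server_free               # measurement period
    X = n_jobs / T                # "throughput" (completion rate)
    R = total_residence / n_jobs  # mean residence time
    return X, R, X * R
```

With lam = 0.5 and mu = 1.0, the textbook M/M/1 prediction is N = ρ/(1−ρ) = 1 job in the system on average, and the simulated X * R comes out close to that.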

  • Yong Fu

    Performance Test Engineer at ENNIU

    Thanks for your answers. 
    Neil's "A Little Triplet" gives me a useful way of thinking about this formula. What N represents has nothing to do with the arrival rate; it is determined by R. When R denotes service time, N denotes the number of transactions currently being serviced in the system. And if R denotes residence time, which includes think time, queuing time, and service time, then N can be read as the number of online users, because only humans have think time; transactions themselves do not. 
    Does that make sense as an understanding of LL? If not, please give me a practical example or approach.

  • Neil Gunther

    Founder/Computer Scientist, Performance Dynamics

    Now you've shifted the question slightly, so we have to be very clear what we are talking about by adjusting the notation a little bit. 

    In my books and classes, I write the version of LL discussed above as Q = X * R or Q = λ * R, where Q means the total of all the processes/requests/transactions in the queues belonging to the system under test (SUT), for example. Then, I can use N to represent the total number of "users" or load generators (GEN). You also tossed a new term, "think time," which I write as Z. What is the relationship b/w these metrics? 

    N = X * R + X * Z = X * (R + Z). 

    Once again, LL tells the story. The total number of users (N) is composed of 2 parts. Why? Because every load test system is composed of 2 parts: the GEN component exerting the load on the system, and the SUT component exhibiting the response you are interested in measuring. In steady state, some portion of user requests are in the SUT, while the remainder are on the GEN side. 

    The 1st term, X * R, is simply Q, the number of requests either waiting or being serviced in the SUT. The 2nd term is not so obvious, but it's simply the number of requests not in the system. How do I know that? Because the time they spend on the GEN side is determined by the think time Z. And X * Z, according to LL, is that number (on average, in steady state). 

    Similarly, LL tells us that N can also be thought of in terms of the system throughput (X) times the total round-trip time (R+Z) in the test rig, i.e., time on the SUT + time on GEN. 

    Since LL is immutable, if the test measurements do *not* jibe with LL, that's a way you can tell that something is wrong with the test setup. Of course, all good performance engineers always check their results, especially against Little's law. :D
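
Neil's two-part decomposition can be turned into a small sanity check. The helpers below (my sketch, not code from the thread) rearrange N = X * (R + Z) to recover the SUT residence time from measured values; the numbers in the example (100 users, 10 req/s, 9 s think time) are invented for illustration:

```python
def sut_residence_time(n_users, throughput, think_time):
    """Closed test rig: N = X * (R + Z)  =>  R = N / X - Z.
    A negative result means the measurements contradict Little's law,
    i.e. something is wrong with the test setup."""
    r = n_users / throughput - think_time
    if r < 0:
        raise ValueError("N, X and Z are inconsistent: check the test rig")
    return r

def requests_in_sut(throughput, residence_time):
    """Q = X * R: requests waiting or being serviced on the SUT side."""
    return throughput * residence_time
```

For example, 100 users driving 10 req/s with a 9 s think time gives R = 100/10 - 9 = 1 s, so Q = 10 * 1 = 10 requests are in the SUT, while the other 90 users sit on the GEN side "thinking" — exactly the X * Z term.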

  • Aravind Sai Kuchibhatla

    Performance test lead at one of the MNC's

    Based on my understanding, Little's law states that the number of users/requests entering (or existing in) the system is equal to the rate at which they enter the system multiplied by the time they spend in the system. This time includes response time, think time, and pacing time. 

    C = R * T 

    C - No. of users/requests 
    R - Rate at which they enter 
    T - Time (response time + think time + pacing time) 

    It is possible that pacing time and think time don't exist at all in some cases...

  • Sarath Kumar Krishnan

    Performance Architect, Capacity/Availability Planning Architect at Cognizant Technology Solutions

    Coming up with the performance model requires considering quite a few parameters: 
    1) User count at peak hour = X 
    2) Transaction distribution = Y1, Y2, Y3, ..., Yn 

    Pick the top few transactions so that together they account for close to 100% of the volume. 

    Spread X across Y1 to Yn. This will be your performance model. This, again, will only give you one of the characteristics; you might have to try various other transaction mixes to ensure all aspects are covered. 
    The choice of transactions is critical to cover the business requirement and to obtain the performance/capacity/availability/scalability view from an IT perspective. 

    If the system is already live and you are testing a functional modification, it is easy to get the usage and pacing from the existing live system.
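
Sarath's recipe (pick the top transactions, then spread the peak-hour user count across them) can be sketched as a small helper; the transaction names, percentages, and the 90% coverage threshold below are hypothetical choices of mine, not from the thread:

```python
def spread_users(peak_users, mix):
    """Spread the peak-hour user count X across a transaction mix
    Y1..Yn, given as fractions of total volume. The chosen transactions
    should cover close to 100% of observed traffic."""
    covered = sum(mix.values())
    if not 0.9 <= covered <= 1.0:
        raise ValueError(f"mix covers {covered:.0%}; pick more top transactions")
    return {name: round(peak_users * share) for name, share in mix.items()}
```

For example, spread_users(1000, {"search": 0.55, "view_item": 0.30, "checkout": 0.10}) yields 550, 300, and 100 users per transaction; rerunning with a different mix gives the other workload variants Sarath mentions.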

  • Yong Fu

    Performance Test Engineer at ENNIU

    Thanks, all. 
    Neil's answer gives me a better understanding of LL, but I still have another question: 
    we usually divide systems into two types: open systems, characterized by an arrival rate, and closed systems, characterized by think time (maybe there is a third type, but I don't want to discuss it here). From the above, LL can involve the arrival rate λ and can also involve the think time Z. So to which type of system can we apply LL? What is the relationship between LL and open versus closed systems?

  • Neil Gunther

    Founder/Computer Scientist, Performance Dynamics

    LL can be applied very generally: including to both so-called "open" and "closed" queueing systems. Think of any computer system as being comprised of a set of nested boxes. You just need to specify which box you're considering and LL will hold locally in steady state. 

    An open queueing system can have an arbitrary value of λ, as long as it doesn't exceed the service rate; otherwise, the waiting line will become infinitely long. The mean arrival rate, λ, is a fixed or constant value, e.g., the average number of httpGets/second. 

    That can't happen in a closed system b/c there can only be a maximal number of possible requests (N) that can be in the system, e.g., N load generators, so it is self-throttling. Moreover, λ is no longer an arbitrary constant but determined by N and Z.
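
Neil's "self-throttling" point is often expressed via the standard operational-analysis throughput bound (a sketch of mine in conventional notation, not a formula from this thread): X(N) <= min(N / (R0 + Z), 1 / Dmax), where R0 is the no-contention residence time and Dmax is the service demand at the bottleneck resource:

```python
def closed_throughput_bound(n_users, r_zero, think_time, d_max):
    """Upper bound on closed-system throughput: at small N it grows
    like N / (R0 + Z); it can never exceed the bottleneck capacity
    1 / Dmax, which is why a closed system self-throttles."""
    return min(n_users / (r_zero + think_time), 1.0 / d_max)
```

With, say, R0 = 1 s, Z = 9 s, and a 0.5 s bottleneck demand, 10 users are bounded at 1 req/s, while 100 users hit the 2 req/s bottleneck ceiling no matter how many more are added.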

  • Alaister Boyd

    ITSM Service, Capacity Availability & Continuity manage

    Neil's Guerrilla Capacity Planning book is a great read if you're getting into this seriously. I have used his methods, built models, and tested them against real-life observations, and found them very close. This allowed me to use a smaller sample of load generators against the system under test on new and developing systems, to forecast well when the system would become saturated. The tests were built using Selenium controlled by Hudson, and the sample observations were put into a spreadsheet I developed based on Neil's techniques. Well, with some mental gymnastics.

  • Yong Fu

    Performance Test Engineer at ENNIU

    Thank you. Actually, I became interested in Neil's theories about two years ago, and they have given me a lot to think about in performance testing. But it seems I am still missing guidance on practice, so I plan to read his books seriously (it isn't easy for readers in China to get hold of these books, you know).

  • Neil Gunther

    Founder/Computer Scientist, Performance Dynamics

    Thanks all for your endorsements: it's nice to know I didn't spend years writing books totally in vain. Unfortunately, however, books alone (even mine) can only take you so far.

    As Yong Fu asks implicitly: where's the practical beef? Actually, it's there. But the starting point in my books is the data that you've already collected; that is important to you; that you know best. Clearly, I can't know what that is, a priori. For data generation/collection, you need to understand how to sling the appropriate tools, e.g., LoadRunner, JMeter, etc., for your shop. That's a given. But that's only half the story. What's the other half?

    All performance data should be assessed within the context of a validation method. How else can you know when it's wrong? [I know, that never happens. (right)] The various performance models and laws in my books *are* the validation methods. This is the aspect that almost all performance engineers fail to fully appreciate [present company excepted. :) ]. The question then becomes, how do you connect your data with the appropriate validation method?

    The methods I discuss in my books are completely general and therefore guaranteed to be applicable to your data. I do give examples and war stories of how I made it work for me. Of course, those are not your data. So, making the connection is the trick.

    That's where my Guerrilla training classes come in. There, you get to ask questions of me directly and also tell me more about your particular circumstances so that I can figure out the connection for you. That's also a way that I learn new things. What emerges is that you don't need to understand *all* the performance modeling methods in my books, but only one or two. Once again, I can't know what they are, a priori.

    The other thing you may learn is that your data is not in the right form to be validated, or the collection strategy is broken, etc. That's the most common problem b/c practitioners place far too much faith in the data collected by sophisticated (and often expensive) tools, just b/c they're sophisticated and/or expensive. Nothing could be further from the truth. Once again, how else can you know that w/o a validation method?

    So, it's by virtue of this back and forth that you begin to see how all this can come together to meet your particular needs. I say this without hesitation b/c I've seen it happen a million times in my classes. Conversely, I'm sometimes astounded that something I regard as trivial turns out to be the most important thing to a particular student. http://www.perfdynamics.com/Classes/comments.html Otherwise, you can be doomed to muddle on for years.

  • Tom Shuttleworth

    Service Delivery Manager at Proact IT UK

    For myself, as an absolute amateur at such things, this - "I'm sometimes astounded that something I regard as trivial turns out to be the most important thing to a particular student" - struck a chord. Very simple things, like visualizing a computer as a queue, are incredibly powerful. It gave me a structure for thinking about how I expect a system to perform. All of a sudden I could interpret iostat and defend my interpretation to people who knew far more about UNIX than I ever will. Knowing that performance doesn't scale linearly with utilization, even if it sometimes looks like it does, is huge. 

    In my experience, remarkably few people working at the coal face of IT have any idea of this stuff and, in the scheme of things, at the level we are talking about here, it really isn't hard.

  • Leonid Grinshpan, Ph.D.

    Practice Manager (North America): IT Performance at Tata Consultancy Services

    @Tom Shuttleworth

    Queues are the major phenomenon defining application performance. In general, any hardware or software resource needed to process a transaction initiated by a user might be in short supply, and the transaction will wait in a queue. That means modeling a distributed application as a queuing network is a powerful abstraction that helps in application performance troubleshooting. Check the link http://tinyurl.com/m99enoh; it points to an article that introduces conceptual models of enterprise applications, uncovering performance-related fundamentals. The value of conceptual models for performance analysis is demonstrated with two examples: conceptual models of virtualized and non-virtualized applications.

  • Henry Steinhauer

    Systems Engineer-ESM ITM / Capacity Planning + Performance Management at Glacier Technologies, LLC

    Interesting phrase by Tom - 'people working at the coal face of IT'. I agree that the rules of thumb people have learned often do not apply when you look at the system as a whole. 

    Also, Neil states that applications are systems within systems. Understand the flow of work and how things are interconnected. Today, with SOA and other abstractions taking place, it is often hard to see those connections unless you can trap the calls made outside the application. Each of those calls is another chance to introduce delays into the application. They are entering a different queue for service. 

    That is what makes this profession so interesting after so many years of working in it. It is a murder mystery to be solved. Performance was killed - whodunit?

  • Jens Olesen

    Director at SMT Data A/S

    N = X * R is a nice little (no pun intended) formula that I use quite often. 
    In order to benefit from it, you have to know: 
    N is the "number in system", meaning the number of units of work queuing or being serviced in the system (not the number of users). 
    X is the "arrival rate", meaning the number of units of work entering the system per time unit (and this is where the number of users and their "think time" can have an impact). 
    R is the "response time" (service plus queuing), expressed in the same time units as X. 
    And as others have already explained, we are talking averages, and assuming that units of work arrive at constant intervals and also end within the observation period. 
    Despite all the assumptions, I have more than a few times used Little's law to show that bad performance was caused by the application's inability to reach the desired degree of parallelism (and not by HW bottlenecks, as most programmers instinctively assume). 
    Example: In a serial application N will always be 1, so try to increase X and see what happens to R (yes, it's just logic, but it helps you understand how the formula works).
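
Jens's diagnostic use of the formula can be sketched as a one-line check: multiply the measured throughput by the measured response time and you get the average number of units of work actually in flight (the measurements in the usage note are invented for illustration):

```python
def inferred_concurrency(throughput, response_time):
    """N = X * R: the average number of units of work in flight implied
    by the measured throughput and response time. For a strictly serial
    application this can never exceed 1: pushing X up just forces R
    down (2/s * 0.5 s = 1, 4/s * 0.25 s = 1)."""
    return throughput * response_time
```

If you measure 50 units/s at 0.5 s response time, only 25 units are in flight on average; if the hardware should sustain 100-way parallelism, Little's law says the application, not the hardware, is the limit.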

 
