tag:blogger.com,1999:blog-15319370.post6177932933498947161..comments2024-11-04T06:03:42.791+01:00Comments on Roland Bouman's blog: Calculating the Nth percentile in MySQLrpboumanhttp://www.blogger.com/profile/13365137747952711328noreply@blogger.comBlogger30125tag:blogger.com,1999:blog-15319370.post-46876349639727859362013-12-13T22:38:33.129+01:002013-12-13T22:38:33.129+01:00Thanks for the detailed responses!Thanks for the detailed responses!Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-79000466933043549092013-12-13T12:41:10.125+01:002013-12-13T12:41:10.125+01:00Hi!
whether the subquery is faster than the 2-ste...Hi!<br /><br />whether the subquery is faster than the 2-step solution is not so easy to say in this case. <br /><br />MySQL subqueries often don't perform too well. Some of that is fixed in MySQL 5.7 and in MariaDB. And just like all the other solutions that rely on GROUP_CONCAT, it needs the memory to build the list of strings.<br /><br />However, the 2-step solution with LIMIT has its own problems (and these even hold for the first solution, since that uses LIMIT too). Even though the LIMIT selects only one result row, it still needs to build a temporary table where it stores the sorted result at least up to the row of interest. Since that row is at the very tail of the entire set, this is not trivial and takes time as well.<br /><br />I can't really make a general statement at this point; you're going to have to try and test with your data to see what performs best. That said, if you don't need the data instantly and you have the time to calculate the result beforehand, then I guess it shouldn't matter too much.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-63095607296072312772013-12-12T21:13:57.322+01:002013-12-12T21:13:57.322+01:00Oh yeah... One more question, doing the subquery (...Oh yeah... One more question: doing the subquery (your first example) is faster than doing two calls, correct?<br /><br />I'm programming in PHP using the mysqli object.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-49358443244698357442013-12-12T21:09:39.098+01:002013-12-12T21:09:39.098+01:00Thanks for the incredibly detailed reply!
I shou...Thanks for the incredibly detailed reply! <br /><br />I should have thought to use an embedded select, but I'm not strong in my SQL.<br /><br />My application is not mission critical, as I'm just trying to get the Xth percentiles of certain data points on what people think is funny. I was grabbing anything that was better than AVG(), but it's too much. I need to narrow down to a high percentile.<br /><br />If I get a chance I may try both ways (using all the data and just subsets) to see how things change. Maybe one can pull funnier stuff out (but "funny" is subjective, so no REAL way to tell which is better).<br /><br />Thanks again!Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-16916865986146054372013-12-12T01:36:16.390+01:002013-12-12T01:36:16.390+01:00Hi Terry!
The GROUP_CONCAT function does not supp...Hi Terry!<br /><br />The GROUP_CONCAT function does not support a LIMIT clause (syntax here http://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html#function_group-concat), so I don't think your example could work.<br />What you can do is use LIMIT at the end of the SELECT expression, and then apply the aggregation. <br />However, LIMIT is applied at the very end of the operation, so you have to do it in 2 steps (otherwise, the LIMIT will be applied after the aggregation, which is not what you want).<br /><br />Something like this:<br /><br />SELECT ...GROUP_CONCAT(...)<br />FROM (<br /> SELECT ... <br /> FROM myTable<br /> ORDER BY myDate DESC<br /> LIMIT 0, 100<br />) AS t<br /><br />...and then this will calculate it over the sample of the last (that's because of the DESC; with ASC it would have been the first) 100 values.<br /><br />While the idea of using only a sample of the values to calculate the percentile is interesting, this particular approach has quite a bunch of problems. There's just no way to know whether the sample you're drawing is a good representation of your entire dataset. <br /><br />There are several other approaches, but each has its own problems. One popular but potentially unreliable method is to use user-defined variables. A clean, but slow approach is to use either a self join or subqueries.<br /><br />It should be possible to use LIMIT to your advantage if it is acceptable to do the task in 2 steps. <br />Step 1 would be to calculate the ordinal of the desired percentile value.<br />Step 2 would be to use LIMIT to actually fetch that value. 
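Outside MySQL, the two-step logic just described is easy to state; here is a minimal Python sketch of it (the list of amounts is made-up sample data, and nth_percentile is a hypothetical helper name, not part of any query in this thread):

```python
# Minimal sketch of the 2-step percentile lookup described above.
# Step 1: turn the fraction into a row ordinal (COUNT(*) * .90, truncated).
# Step 2: fetch the value at that offset in the sorted data
#         (the equivalent of ORDER BY ... LIMIT ordinal, 1).
def nth_percentile(values, fraction):
    ordinal = int(len(values) * fraction)   # step 1: e.g. SELECT COUNT(amount) * .90 ...
    return sorted(values)[ordinal]          # step 2: ORDER BY amount LIMIT ordinal, 1

amounts = [0.99, 1.99, 2.99, 2.99, 4.99, 5.99, 6.99, 7.99, 8.99, 9.99]
print(nth_percentile(amounts, 0.90))        # prints 9.99
```

Like the LIMIT-based query, this still sorts the whole set just to fetch one row, which is the cost described above.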
<br /><br />Example: <br /><br />mysql> select count(amount)*.90 from sakila.payment ;<br />+-------------------+<br />| count(amount)*.90 |<br />+-------------------+<br />| 14444.10 |<br />+-------------------+<br />1 row in set (0.01 sec)<br /><br />mysql> select amount from sakila.payment order by amount limit 14444, 1;<br />+--------+<br />| amount |<br />+--------+<br />| 6.99 |<br />+--------+<br />1 row in set (0.02 sec)<br /><br />HTH, <br /><br />Roland.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-19103139740786158072013-12-11T23:06:58.301+01:002013-12-11T23:06:58.301+01:00I'm having problems with the GROUP_CONCAT limi...I'm having problems with the GROUP_CONCAT limit and don't have access to change config files on the server.<br /><br />However, I have a field in the database for the date each data point was inserted.<br /><br />Is there a way to order the data by the date and then only use the most recent 100 data points?<br /><br />I'm not strong in my SQL but this is what I think I'd want if it were syntactically correct:<br /><br />SELECT SUBSTRING_INDEX(<br /> SUBSTRING_INDEX(<br /> GROUP_CONCAT( <br /> f.length<br /> ORDER BY f.length, f.createdAt LIMIT 0,100<br /> SEPARATOR ','<br /> )<br /> , ',' <br /> , 90/100 * COUNT(*) + 1 <br /> )<br /> , ',' <br /> , -1 <br /> ) AS `90th Percentile`<br />FROM myTable AS f 
Thanks for your comment!
First...Dear @Anonymous, <br /><br />Thanks for your comment!<br /><br />First, I should point out that there is absolutely nothing wrong with the given definition of the median as the "number of observations with a higher value is equal to the number of observations that has a lower value."<br /><br />What is wrong is my example, or at the very least it is confusing. I wrote a plus there to indicate that both 2 and 3 are the middle values (if we disregard the two middle values 2 and 3 themselves). That was silly and confusing. The final figure, 2.5, is the mean of 2 and 3 which is normally accepted as the "right" approximation of the median in case there is an even number of observations. But as an example, this is really confusing since the mean of the entire series is also 2.5. And indeed, typically the arithmetic mean is used as the best measure for the average.<br /><br />I will update the text to more clearly describe what I meant.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-73117028707980227062012-02-09T10:45:01.528+01:002012-02-09T10:45:01.528+01:00what you define as median is the average in fact:
...what you define as median is the average in fact:<br /><br /> "the number of observations with a higher value is equal to the number of observations that has a lower value. For the series {1,2,3,4} the median would be 2+3 = 2.5."<br /><br />the median would be 2 or 3; the difference between median and average is that the median is included in the list of values, the one that has 50% of values being lower and 50% higher.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-30251921347507672522011-10-13T05:55:31.564+02:002011-10-13T05:55:31.564+02:00Thanks! this helped! Ilya.Thanks! this helped! Ilya.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-82370637279065181782009-12-14T15:53:58.991+01:002009-12-14T15:53:58.991+01:00Roland, thanks a lot for this.
I did try some sim...Roland, thanks a lot for this. <br />I did try some simpler solutions, like ordering the dataset and limiting at the 90th percentile value, but unfortunately, passing variables to MySQL's LIMIT clause does not work...even now, after 4 years of people complaining about that. <br />I tried preparing the statements, and executing them, but as much as it worked in a mysql editor, it did not work in my Jasper Report, which flatly refused to execute a prepared statement.<br /><br />This is how I got back to your solution, adapted it, and used it for doing the 90th percentile on a big set of response times.<br /><br />For those of you still looking for a customized query, working on numbers, here is the script I have used:<br />SELECT <br />cast(substring_index(substring_index((group_concat(cast(t as char) order by t) ),',',(round(0.9*count(t)+1))),',',-1) as signed int) as 90perc from testresults where testrun_id=2 group by http_request<br /><br />In this case, t is the value of the response time, http_request is the request in the table, and testrun_id is just the id of the testcase I have run. <br /><br />I do hope it will help. And it will, of course, only help if used together with increasing the group_concat_max_len system variable.<br /><br />Thanks again! <br />AlexAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-38319953255011347602009-12-02T21:58:37.092+01:002009-12-02T21:58:37.092+01:00Dear anonymous,
this is not a bug. In the very po...Dear anonymous,<br /><br />this is not a bug. In the very post you are commenting on, this is mentioned beneath the header group_concat_max_len. <br /><br />You seem to be the type that doesn't want to waste too much time on reading, so I will post the fix right here. <br /><br />Simply set the group_concat_max_len server variable to the practical maximum value:<br /><br />SET @@group_concat_max_len := @@max_allowed_packet;<br /><br />You can also set this globally in your mysqld option file.<br /><br />kind regards,<br /><br />Rolandrpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-26734928885914666002009-12-02T21:53:59.928+01:002009-12-02T21:53:59.928+01:00There is a bug with this solution when the concate...There is a bug with this solution when the concatenated string in the group_concat function is larger than 1024 bytes.<br /><br />This string gets truncated.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-6789805644309134932008-07-22T00:16:00.000+02:002008-07-22T00:16:00.000+02:00Gody, thanks! (focussing on the user variable prob...Gody, thanks! <BR/><BR/>(focussing on the user variable problem)<BR/><BR/>Yeah it is true - most of the user variable solutions I posted aren't really safe. I kind of got lost doing them, forgetting to properly test the results. 
<BR/><BR/>The GROUP_CONCAT solution is safe though, as there is explicit ordering there (independent of an index) and no user variables.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-10951700710288215472008-07-21T22:56:00.000+02:002008-07-21T22:56:00.000+02:00I stumbled across your blog when I wanted to find ...I stumbled across your blog when I wanted to find ideas for good queries for calculating quantiles in a (very) big dataset.<BR/><BR/>Maybe I can help on some issues (or not) and ask a question:<BR/><BR/>For some different definitions/calculations of quantiles (which include percentiles) see e.g.<BR/>http://mathworld.wolfram.com/Quantile.html (an excellent site in general, including references) <BR/><BR/>I believe Q1 and Q5 mentioned there are most commonly used.<BR/>(Q5 should give the more common median definition, which may help resolve some issues for users should you implement it).<BR/><BR/>Regarding the median: For large draws from a distribution<BR/>(whose quantiles you want to compute) the competing definitions you mention (and for Qx) will not matter since the values converge for large N. Again (see above) I believe the more common estimator is X(ceil(N/2)) for odd and<BR/>(X(floor(N/2)) + X(ceil(N/2)))/2 for even N.<BR/><BR/>Regarding the issue of equal ranks:<BR/>Since strictly speaking I believe quantiles need unique orders (no equal elements) the most common procedure -I think- is to pretend they are.<BR/>So rank 1 2 3 instead of 1 2 2<BR/>(2 and 3 being equal) to calculate the quantiles. <BR/><BR/>Regarding your (RB) solutions:<BR/>Is it possible for your queries with user-defined variables to run into problems when MySQL does not process the numbers in the right order? e.g. if part of a more complicated query including joins<BR/>(e.g. as described in<BR/>http://dev.mysql.com/doc/refman/5.0/en/user-variables.html)<BR/>or when randomly ordered on disk (e.g. 
no index)<BR/><BR/>GodyAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-62845899307950648742008-07-08T18:58:00.000+02:002008-07-08T18:58:00.000+02:00Vladimir, sure, see:lib_mysqludf_stathttp://www.mys...Vladimir, <BR/><BR/>sure, see:<BR/><BR/>lib_mysqludf_stat<BR/><BR/>http://www.mysqludf.org/lib_mysqludf_stat/index.phprpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-35497942931447714562008-07-08T17:30:00.000+02:002008-07-08T17:30:00.000+02:00Roland,I think it's a great idea. Btw, is there a ...Roland,<BR/><BR/>I think it's a great idea. Btw, is there a suitable UDF-package where it could be included?Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-86954083898580460402008-07-07T23:50:00.000+02:002008-07-07T23:50:00.000+02:00Hi Vladimir, great you got it to work! Maybe we ca...Hi Vladimir, great you got it to work! <BR/><BR/>Maybe we can work together and create UDFs for this too? Just for the fun and of course to find out if a UDF can beat user variables?<BR/><BR/>Just let me know and we'll make it happen.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-33301506450583312212008-07-07T22:30:00.000+02:002008-07-07T22:30:00.000+02:00Roland,the following query worked for me and showe...Roland,<BR/>the following query worked for me and showed pretty good performance:<BR/><BR/>mysql> set @cnt1 := @cnt2 := 0; select c, p from (SELECT c, (@cnt1:= @cnt1 + 0 + COUNT(*)) p FROM tt GROUP BY c) x where p > (select count(*) from tt)*0.9 limit 1;<BR/>Query OK, 0 rows affected (0.00 sec)<BR/><BR/>+-------+---------+<BR/>| c | p |<BR/>+-------+---------+<BR/>| 90270 | 3600026 |<BR/>+-------+---------+<BR/>1 row in set (3.06 sec)<BR/><BR/>table size: 4M, distinct value count: 100803<BR/><BR/>the division didn't work properly, also I had to use sub-select as checking p in HAVING re-evaluated p and gave incorrect 
output.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-41479451733835769532008-07-07T21:05:00.000+02:002008-07-07T21:05:00.000+02:00Roland,the idea with user variables is excellent e...Roland,<BR/><BR/>the idea with user variables is excellent except it didn't work for me "as is" (I even updated the server to the 5.1.25-community). The form<BR/><BR/>@cum_cnt:=@cum_cnt + COUNT(*)<BR/><BR/>worked for me simply as COUNT(*) which was surprising as it obviously worked for you. I quickly surfed through the variables and SQL modes manual sections, but didn't find anything relevant. Anyway, after some time "hands-on keyboard" I found the way to make it work. Here's the log:<BR/><BR/>mysql> show create table a \G<BR/>*************************** 1. row ****<BR/> Table: a<BR/>Create Table: CREATE TABLE `a` (<BR/> `c` int(11) DEFAULT NULL<BR/>) ENGINE=MyISAM DEFAULT CHARSET=latin1<BR/>1 row in set (0.00 sec)<BR/><BR/>mysql> select * from a;<BR/>+------+<BR/>| c |<BR/>+------+<BR/>| 1 |<BR/>| 2 |<BR/>| 2 |<BR/>| 2 |<BR/>| 3 |<BR/>| 4 |<BR/>| 4 |<BR/>+------+<BR/>7 rows in set (0.00 sec)<BR/><BR/>mysql> set @num1 := @num2 := @num3 := 0; select count(*), @num1 := (@num1 + count(*)) c1, @num2 := (@num2 + count(*) + 0) c2, @num3 := (@num3 + 0 + cou<BR/>nt(*)) c3 from a group by c;<BR/>Query OK, 0 rows affected (0.00 sec)<BR/><BR/>+----------+------+------+------+<BR/>| count(*) | c1 | c2 | c3 |<BR/>+----------+------+------+------+<BR/>| 1 | 1 | 1 | 1 |<BR/>| 3 | 3 | 3 | 4 |<BR/>| 1 | 1 | 1 | 5 |<BR/>| 2 | 2 | 2 | 7 |<BR/>+----------+------+------+------+<BR/>4 rows in set (0.00 sec)<BR/><BR/>As you can see it works only in one specific form (c3). 
I will try to do the benchmark today...Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-12864722318432588132008-07-07T13:31:00.000+02:002008-07-07T13:31:00.000+02:00Vladimir, your idea concerning the "accumulative" ...Vladimir, <BR/><BR/>your idea concerning the "accumulative" or "cumulative" aggregate functions is pretty interesting! <BR/><BR/>It's probably pretty trivial to create UDFs for these to prove the concept. I am quite sure your hunch that they will be pretty fast is a good one - I expect this to be the fastest solution, provided the group by can be executed efficiently.<BR/><BR/>Note that you can already do this trick with user variables:<BR/><BR/>-- allow batching <BR/>DELIMITER go statements<BR/><BR/>-- initialize<BR/>SELECT COUNT(*), 0<BR/>INTO @cnt, @cum_cnt<BR/>FROM sakila.film;<BR/><BR/>-- calculate percentile<BR/>SELECT length<BR/>, @cum_cnt:=@cum_cnt + COUNT(*)/@cnt as p<BR/>FROM film<BR/>GROUP BY length<BR/>HAVING p >= 0.9<BR/>LIMIT 1;<BR/><BR/>go<BR/><BR/>Can you repeat your benchmark with this trick and see what it gives? Thanks in advance,<BR/><BR/>Roland.rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-55498373347364429322008-07-07T13:00:00.000+02:002008-07-07T13:00:00.000+02:00Roland,what I meant by "typical value" is exactly ...Roland,<BR/><BR/>what I meant by "typical value" is exactly what is described in the Wikipedia article about the median http://en.wikipedia.org/wiki/Median . In the example with poor men and a rich man in the room the "typical" value "on the table" is $5. 
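The user-variable trick Roland shows above keeps a running total of group counts over the ordered values; here is a minimal Python sketch of the same logic, on made-up sample data (percentile_by_running_total is a hypothetical name, not part of any query in this thread):

```python
from collections import Counter

def percentile_by_running_total(values, fraction):
    # Mimics: GROUP BY value with @cum_cnt := @cum_cnt + COUNT(*)/@cnt,
    # then HAVING p >= fraction LIMIT 1.
    total = len(values)                                   # SELECT COUNT(*) INTO @cnt
    cum = 0.0                                             # @cum_cnt starts at 0
    for value, count in sorted(Counter(values).items()):  # one row per distinct value, in order
        cum += count / total                              # running fraction of rows seen so far
        if cum >= fraction:                               # HAVING p >= 0.9 ... LIMIT 1
            return value
    return None

lengths = [1, 1, 2, 3, 4, 4, 4, 5]
print(percentile_by_running_total(lengths, 0.9))          # prints 5
```

Unlike the SQL version, the ordering here is explicit, so it does not depend on how the server happens to evaluate user variables.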
I apologize if my explanations were not very clear...<BR/><BR/>About the INCREMENTAL thing: (again my apologies I was in a hurry, and also made a mistake in the pseudocode) the example would be as follows (I will imply the film table in the example):<BR/><BR/>SELECT COUNT(INC length), length FROM film GROUP BY length<BR/><BR/>here COUNT(INC length) would be the same as normal COUNT(length) except it wouldnt reset the function value after every group, for example if we have length values 1,1,2,3,4,4,4,5 then the query above would give us <BR/><BR/>COUNT(INC len) len<BR/> 2 1<BR/> 3 2<BR/> 4 3<BR/> 7 4<BR/> 8 5<BR/><BR/>while the normal <BR/><BR/>SELECT COUNT(length), length FROM film GROUP BY length<BR/><BR/>would give<BR/><BR/>COUNT(len) len<BR/> 2 1<BR/> 1 2<BR/> 1 3<BR/> 3 4<BR/> 1 5<BR/><BR/>the result above as you understand can be reached by a fairly trivial modification to the aggregate function code. Now having the cumulative result above we can continue our calculation:<BR/><BR/>SELECT COUNT(INC length)/(SELECT COUNT(*) FROM film) p, length FROM film GROUP BY length<BR/><BR/>this query will return us all distinct "length" values from the table along with their percentile values. And finally we filter out what we [dont] need by e.g. adding<BR/><BR/>HAVING p >= 0.9 ORDER BY p LIMIT 1 <BR/><BR/>Regards,<BR/>VladimirAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-54623720907678116392008-07-07T12:22:00.000+02:002008-07-07T12:22:00.000+02:00Hi Vladimir, thanks for coming back and taking the...Hi Vladimir, <BR/><BR/>thanks for coming back and taking the time to think about this. <BR/><BR/>"what I still remember from university times is that the median intuitively is the most "typical" value (or values) in the distribution. 
Just out of curiosity I would appreciate if you point me to some books/applications of this "mean" median."<BR/><BR/>Well, there are three 'similar but different' metrics to characterize the 'typical' value for a normal distribution:<BR/>* mean: sum(value)/count(*)<BR/>* mode: most abundant value<BR/>* median: 'middle' value<BR/><BR/>If you google for "statistical median" (don't use the quotes) you will get numerous articles, and all I have seen first explain that it is the middle value and that the mean of the two middle values is taken in case there is an even number of observations. I am sure you will find proper references to textbooks too if you look hard enough. (I've looked into a number of my Dutch statistics textbooks, and all mention the 'mean' trick, some also mention the 50th percentile. It just proves statistics is not that exact I guess.)<BR/><BR/>Concerning your alternative solution: this is great stuff, thank you! <BR/>Clearly your solution performs much better for larger sets.<BR/><BR/>Concerning your INCREMENTAL proposal:<BR/>Maybe I don't understand completely. The statement you gave:<BR/><BR/>SELECT ... SUM(INCREMENTAL length)/(SELECT COUNT(*)) ... GROUP BY length ...<BR/><BR/>is not entirely complete so maybe something got lost in translation. What are you calculating exactly here? Where is the COUNT(*) coming from? <BR/><BR/>kind regards,rpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.comtag:blogger.com,1999:blog-15319370.post-18926257840338659632008-07-07T10:03:00.000+02:002008-07-07T10:03:00.000+02:00Roland,OK, not putting the validity of your defini...Roland,<BR/><BR/>OK, not putting the validity of your definition into question anymore, I'm still fairly surprised, as what I still remember from university times is that the median intuitively is the most "typical" value (or values) in the distribution. 
Just out of curiosity I would appreciate if you point me to some books/applications of this "mean" median.<BR/><BR/>The calculation:<BR/>Of course your join examples will be very slow as you over join all the rows. However this is not necessary. What we need is the number of values that are "below" the given. So instead of using the whole table as the join predicate we can rather use something like:<BR/><BR/>(select count(*) c, length l from film group by l [order by l])<BR/><BR/>If the number of distinct values in the table is relatively small this will work fast regardless of the total number of rows in the table.<BR/><BR/>I've got the following query that worked for me:<BR/><BR/>select sql_no_cache<BR/>sum(g1.r) sr, g2.length l, sum(g1.r)/(select count(*) from film) p<BR/>from<BR/>(select count(*) r, length from film group by length) g1<BR/>join<BR/>(select count(*) r, length from film group by length) g2<BR/>on g1.length < g2.length<BR/>group by g2.length<BR/>having p > 0.9<BR/>order by p<BR/>limit 1<BR/><BR/>(of course it can be optimized to use temp tables; we might also play with "WITH ROLLUP" instead of select count(*) for the whole table)<BR/><BR/>I have created a MyISAM table and tried this query and the group_concat one (spec: my quite old notebook PM-2000, 2Gb, 4500rpm, sql_no_cache, default memory settings, except increased concat buffer to 14M, every query run 3 times, win xp):<BR/><BR/>4M int rows, 999 distinct values:<BR/><BR/>group_concat: 1 min 6.23 sec<BR/>group_concat: 1 min 6.31 sec<BR/>group_concat: 1 min 6.48 sec<BR/>groupby-join: 5.36 sec<BR/>groupby-join: 5.28 sec<BR/>groupby-join: 5.31 sec<BR/><BR/>1M int rows, 999 distinct values:<BR/><BR/>group_concat: 5.31 sec<BR/>group_concat: 5.30 sec<BR/>group_concat: 5.31 sec<BR/>groupby-join: 2.47 sec<BR/>groupby-join: 2.52 sec<BR/>groupby-join: 2.47 sec<BR/><BR/>100K int rows, 999 distinct values:<BR/><BR/>group_concat: 0.56 sec<BR/>group_concat: 0.56 sec<BR/>group_concat: 0.56 
sec<BR/>groupby-join: 1.63 sec<BR/>groupby-join: 1.61 sec<BR/>groupby-join: 1.63 sec<BR/><BR/>I must admit that when N of distinct rows reaches approx. 10K I get pretty the opposite results if the total number of rows is relatively small. Basically we get into the same situation as with joining the whole tables.<BR/><BR/>But what I think is the right solution is having something like this on the server side:<BR/><BR/>SELECT ... SUM(INCREMENTAL length)/(SELECT COUNT(*)) ... GROUP BY length ...<BR/><BR/>I.e. a flag that tells the server to not reset per-group counters in the aggregate functions. This would be quite a trivial change in the Item_sum class and would make sense not only for SUM, but maybe also for MIN/MAX, AVG, COUNT and maybe some other aggregate functions. In this case the query would be very simple and fast and not consume any extra memory. (Maybe I will have time to write such a patch :) )<BR/><BR/>Regards,<BR/>VladimirAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-56713250763499831602008-07-06T13:34:00.000+02:002008-07-06T13:34:00.000+02:00Great ideaPavelGreat idea<BR/><BR/>PavelAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-15319370.post-67400138224185766802008-07-04T20:36:00.000+02:002008-07-04T20:36:00.000+02:00Vladimir, thanks for your reply.Regarding the medi...Vladimir, <BR/><BR/>thanks for your reply.<BR/><BR/>Regarding the median definition: there are a number of definitions. I know wikipedia is not authoritative, but what I read there matches exactly what I find in several statistics textbooks:<BR/><EM><BR/>In probability theory and statistics, a median is described as the number separating the higher half of a sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one. 
If there is an even number of observations, the median is not unique, so one often takes the mean of the two middle values. At most half the population have values less than the median and at most half have values greater than the median. If both groups contain less than half the population, then some of the population is exactly equal to the median. For example, if a < b < c, then the median of the list {a, b, c} is b, and if a < b < c < d, then the median of the list {a, b, c, d} is the mean of b and c, i.e. it is (b + c)/2.<BR/></EM><BR/>(from: <A HREF="http://en.wikipedia.org/wiki/Median" REL="nofollow">wikipedia</A>)<BR/><BR/>I believe the 'trick' with taking the mean is sometimes called the 'financial median' and I am sure it is known by other terms as well.<BR/><BR/>It may be that some sources define the median as a special case of a percentile (or some other <A HREF="http://en.wikipedia.org/wiki/Quantile" REL="nofollow">quantile</A>). However, which way you turn it, there will always be a problem whenever a single value appears multiple times and does not allow a clean cut along the quantile boundary. <BR/><BR/>Your remark that the median must be taken from the distribution is just yet another notion of the concept of the median, but the mean is just as accepted, although perhaps not for the same applications.<BR/><BR/>Your observation that all values are concatenated is correct. I agree that it may seem inefficient, and I am sure it is when the dataset grows very large. 
However, for this particular example of 2000 films (sakila database),<BR/><BR/>I get:<BR/><BR/>my solution: returns immediately with 173<BR/><BR/>A solution with a JOIN:<BR/><BR/>SELECT DISTINCT p.length<BR/>FROM film p<BR/>INNER JOIN film v<BR/>ON p.length > v.length<BR/>GROUP BY p.film_id<BR/>HAVING CAST(<BR/> COUNT(*) / (<BR/> SELECT COUNT(*) <BR/> FROM film<BR/> ) <BR/> AS DECIMAL(3,2)<BR/> ) = 0.90<BR/><BR/>takes about 0.50 sec, and returns 174<BR/><BR/>A solution with a subquery:<BR/><BR/>SELECT DISTINCT length<BR/>FROM film f<BR/>WHERE .90 =<BR/> CAST(<BR/> (SELECT COUNT(*) <BR/> FROM film p <BR/> WHERE p.length < f.length<BR/> )<BR/> / (SELECT COUNT(*) FROM film) <BR/> AS DECIMAL(3,2)<BR/> )<BR/><BR/>takes 0.13 seconds, also returning 174.<BR/><BR/>So for this dataset, my method is by far the fastest. I also tried it on a larger dataset of payments, calculating the 90th percentile of the amount (16000-something rows). <BR/><BR/>There, my method returns in 0.09s or less, and the join and subquery solutions just don't seem to want to return anytime soon - I killed them.<BR/><BR/>I would be very interested in hearing alternative solutions though. I mean, I don't claim my method is elegant or whatever - I would much rather use something that is not dependent upon group_concat_max_len, so if you know something, please post here and let us all know.<BR/><BR/>Thanks, and kind regards,<BR/><BR/>Rolandrpboumanhttps://www.blogger.com/profile/13365137747952711328noreply@blogger.com
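To close the thread: the GROUP_CONCAT/SUBSTRING_INDEX technique these comments build on amounts to slicing an ordered, comma-separated string. A minimal Python sketch of that computation follows (the sample lengths are made up, and like SUBSTRING_INDEX the result is a string):

```python
def percentile_via_group_concat(values, fraction):
    # Emulates: SUBSTRING_INDEX(SUBSTRING_INDEX(
    #     GROUP_CONCAT(v ORDER BY v), ',', fraction * COUNT(*) + 1), ',', -1)
    concatenated = ",".join(str(v) for v in sorted(values))  # GROUP_CONCAT(v ORDER BY v)
    head = concatenated.split(",")[:int(fraction * len(values) + 1)]  # first f*N+1 items
    return head[-1]                                          # ...and the last of those

lengths = [46, 50, 120, 130, 150, 173, 180, 181, 184, 185]
print(percentile_via_group_concat(lengths, 0.9))             # prints 185
```

This also makes the group_concat_max_len concern obvious: the whole sorted column has to fit into one string before anything can be sliced out of it.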