Interactive – why is this the best governor? [INFO] - UPDATED 2/23 - T-Mobile Samsung Galaxy Note 3

Many people have tried this governor out, but few have tested it in the 3 most important categories that should be considered when deciding which governor is the best for you:
=====================
UPDATED 2/23 - I have been getting a lot of PMs about what settings I specifically use for the interactive governor and the deadline IO scheduler. See the end of this post for those settings.
=====================
Performance
Battery Life
CPU times
First, let’s talk about the MAIN reason this governor stands out above the default “ondemand” on probably most if not all Android devices….. TIME. Yes, time. The main difference between the two governors is that interactive operates on timed values rather than constantly calculating load and trying to compensate for the ever-changing size and number of operations the CPU is asked to execute. Always determining the load, adjusting to it, and compensating for it costs you two things – TIME (again this word is used, but in the dynamic of the ondemand governor I mean it negatively, as a cost or penalty) and latency. Ondemand uses up and down threshold values to figure out when to bring the CPU out of idle or back down to idle, checking what kind of load is being placed on the CPU at a predetermined “tick” of 50,000 µs (50 ms). This is done going up AND going down.
Interactive uses timers to adjust. There is a “tick” much like ondemand, but it is significantly shorter and the way it responds is a bit more clever. Rather than try to play cat and mouse with your operation requests, it uses timers to more aggressively respond and handle a work queue. From android.googlesource.com:
Code:
"The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
usage, similar to "ondemand" and "conservative" governors. However,
the governor is more aggressive about scaling the CPU speed up in
response to CPU-intensive activity."
Now let’s look at the brains of the governor:
Code:
[U]min_sample_time:[/U] The minimum amount of time to spend at the current
frequency before ramping down. This is to ensure that the governor has
seen enough historic cpu load data to determine the appropriate
workload. Default is 80000 uS.
Code:
[U]hispeed_freq:[/U] An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
then speed may be bumped higher. Default is maximum speed.
Code:
[U]go_hispeed_load:[/U] The CPU load at which to ramp to the intermediate "hi
speed". Default is 85%.
Code:
[U]above_hispeed_delay:[/U] Once speed is set to hispeed_freq, wait for this
long before bumping speed higher in response to continued high load.
Default is 20000 uS.
Code:
[U]timer_rate:[/U] Sample rate for reevaluating cpu load when the system is
not idle. Default is 20000 uS.
And here we even have the option to use hispeed_freq as a touch module:
Code:
[U]input_boost:[/U] If non-zero, boost speed of all CPUs to hispeed_freq on
touchscreen activity. Default is 0.
What you can clearly see here is the distinction of making operations time sensitive instead of load sensitive. Rather brilliant. What this does is remove the “calculation” factor from the equation and allow the user to take complete control of the device in terms of (remember those 3) performance, battery life, and CPU times! The last two are closely related and could be thought of as one.
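All of those tunables live in sysfs, so you can dump what your own kernel exposes. A quick sketch – the path below is the usual one on interactive-enabled kernels, but that's an assumption, so verify it on your device (and note some files may need a root shell to read):

```shell
# Path assumed; verify on your device.
GOV_DIR=/sys/devices/system/cpu/cpufreq/interactive

# Print each interactive tunable with its current value.
show_tunables() {
  dir=$1
  if [ ! -d "$dir" ]; then
    echo "no interactive governor at $dir"
    return 0
  fi
  for f in "$dir"/*; do
    if [ -r "$f" ]; then
      printf '%s: %s\n' "$(basename "$f")" "$(head -n 1 "$f")"
    fi
  done
}

show_tunables "$GOV_DIR"
```

If the directory isn't there, your kernel either doesn't build the interactive governor or mounts it elsewhere.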
Studying these values, and testing them individually myself, I have found that the trick is to set hispeed_freq to approximately 70-75% of the CPU’s STOCK speed – so for a Note 3, right around 1.72 GHz (if you do not use the touch boost module). Then we make this parameter extremely aggressive by setting go_hispeed_load to a value right around 60-65.
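To show the arithmetic behind that figure: assuming the N900T's stock maximum of 2265600 kHz (an assumption – check cpuinfo_max_freq on your own device), 75% lands just below the 1728000 kHz frequency step, which is why ~1.72 GHz is the pick:

```shell
# Stock max for the Note 3's SoC, assumed here; read it from
# /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq on-device.
MAX_KHZ=2265600

# 75% of stock; the nearest available step on this SoC is 1728000 kHz.
TARGET=$(( MAX_KHZ * 75 / 100 ))
echo "$TARGET"   # 1699200
```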
What this does is give you a very snappy response for medium-intensity tasks without spinning the CPU up unnecessarily. Most things you do on a day-to-day basis will not require the processor to be ramped to max. Ondemand, however, shows FAR more aggressive patterns in CPU times – the reason is ondemand’s latency (the cat-and-mouse game I mentioned earlier) and its inability to keep up with your ever-changing, dynamic work queues.
I would rather not write a novel – all of this just rolled off the keys because I was bored – but if you have any questions about comparisons between the two governors, please feel free to post here and I will do my best to reply with something useful to you.
My personal settings for interactive:
above_hispeed_delay: 15000
boost: 0
boostpulse_duration: 60000
go_hispeed_load: 70
hispeed_freq: 1728000
io_is_busy: 0
min_sample_time: 60000
target_loads: 90
timer_rate: 15000
timer_slack: 60000
My personal settings for deadline:
add_random: 1
iostats: 1
nomerges: 0
rotational: 0
rq_affinity: 1
fifo_batch: 4
front_merges: 1
read_expire: 500
write_expire: 3000
writes_starved: 3
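If you want to try these settings, here is a hedged root-shell sketch that writes them through sysfs. The governor directory and the block device name (mmcblk0) are assumptions that vary between kernels, so the helper simply skips any tunable your kernel doesn't expose:

```shell
# Paths are assumptions; adjust for your kernel. Root required on-device.
GOV=/sys/devices/system/cpu/cpufreq/interactive
IOQ=/sys/block/mmcblk0/queue

# Write a value only if the tunable exists and is writable.
set_tunable() {
  if [ -w "$1" ]; then
    echo "$2" > "$1" && echo "set $1 = $2"
  else
    echo "skip $1"
  fi
}

# Interactive governor settings from the post above.
set_tunable "$GOV/above_hispeed_delay"  15000
set_tunable "$GOV/go_hispeed_load"      70
set_tunable "$GOV/hispeed_freq"         1728000
set_tunable "$GOV/min_sample_time"      60000
set_tunable "$GOV/target_loads"         90
set_tunable "$GOV/timer_rate"           15000
set_tunable "$GOV/timer_slack"          60000

# Deadline IO scheduler settings from the post above.
set_tunable "$IOQ/scheduler"            deadline
set_tunable "$IOQ/iosched/fifo_batch"   4
set_tunable "$IOQ/iosched/read_expire"  500
set_tunable "$IOQ/iosched/write_expire" 3000
set_tunable "$IOQ/iosched/writes_starved" 3
```

Settings written this way do not survive a reboot; a kernel tweak app or init.d script is needed to make them stick.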

Thanks for shedding light in this cun7. Very helpful.
Sent from my SM-N900T using Tapatalk

Dropping dem knowledge bombs, cun7

A lot of useful information there, cun7. Thank you, good sir.

Great insight... Thank you, bro
Sent from my SPH-L720 using Tapatalk

It would also be worth mentioning, that in my own personal testing, interactive spends less time overall in higher frequency steps, and the result in the UI is still the same - very fluid with no latency or stuttering. There is so much power wasted when using the ondemand governor. It just isn't as responsive or fine tuned when it comes to your interaction with the device.
Interactive performs when it needs to, and settles down when it doesn't. But more importantly it does this more accurately/appropriately than any other. Why it isn't the default setting on mobile platforms is beyond my understanding. ondemand is "tried and true", I get that, but it just isn't efficient. There are better ways to skin a cat.

Thanks for this! What's your insight regarding schedulers and their settings?
Sent from my SM-N900T using XDA Premium 4 mobile app

Vinsane said:
On these devices, noop or deadline. Nothing else performs as optimally or logically for a device that does not have a "moving mechanism" disk for IO.
ROW is one I have seen popping into some kernel builds over the last year or so. Why? I honestly don't know. This IO scheduler is terrible for a device that does read AND write operations regularly. Giving one a blanket higher priority makes absolutely no sense: performance suffers under heavy load when smaller read operations are given higher priority on the disk over large write operations. As with anything, there needs to be balance – not a one-size-fits-one when there are two more needing to be fitted. Make sense?
Deadline, in my opinion, is the best overall because it attempts to prioritize which operations get served first based upon their calculated "need", and services them accordingly – again we see time as the carefully calculated variable and not some other factor. Deadline says "hey, do this one over here; it is going to be the most taxing on the CPU and is therefore more important to the user experience"...
Noop is a solid one as well, and much simpler. It processes any and all tasks AS THEY COME IN: as an operation is requested, so is it done. This scheduler also has a very unique attribute over deadline in that noop can merge requests together and run them through the pipe at the same time. The only drawback to noop is in fact its merging – performance can suffer when there is a lot going on, because there is no priority given to this over that.
My experience has been that you cannot go wrong with either, but I personally prefer deadline. In some instances I have actually seen noop's weakness rear its head through my interaction with the device, then switched to deadline and seen that scheduler handle the same set of tasks with fewer hiccups.
Noop is also slightly (and by slightly I mean theoretically, because I have never seen the difference represented in my battery life at the end of the day) easier on the battery because of the merging function: it requires less CPU to process one merged task than two separate tasks. Theory, of course...
I use deadline pretty religiously myself, and performance is what I am seeking with it. All the other hybrid frankenstein IO schedulers out there simply don't square up – sio, vr, bfq, cfq, fifo, lifo, row... They all have some weird parameter that simply lacks the logic to apply to flash memory, or Android, etc.
Deadline is simple, straightforward, and extremely effective. That is all.
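For anyone wanting to try deadline, switching is a one-line sysfs write, and the kernel marks the currently active scheduler with brackets when you read the file back. A sketch (mmcblk0 is an assumption – your internal storage node may differ):

```shell
# Block device name assumed; adjust for your device. Root required to write.
SCHED_FILE=/sys/block/mmcblk0/queue/scheduler

# Pull the bracketed (active) scheduler out of a line like "noop [deadline] cfq".
active_sched() {
  echo "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

if [ -w "$SCHED_FILE" ]; then
  echo deadline > "$SCHED_FILE"
fi

line=$(cat "$SCHED_FILE" 2>/dev/null)
if [ -n "$line" ]; then
  echo "active: $(active_sched "$line")"
else
  echo "no scheduler file at $SCHED_FILE"
fi
```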

cun7 said:
Great explanation. I am surprised you don't like sio though. It's basically noop when it comes to FCFS and request merging, but adds deadlines for fairness, helping cut down on those times when noop can be detrimental.

themichael said:
Actually, sio isn't bad. It is just one of those frankenstein IO schedulers, in my opinion. In this scheduler we have service deadlines (a deadline function) but no sorting of requests... it's like noop with a timer – kind of a redundant scheduler, because what is the point of setting a deadline on a task if the scheduler still uses FCFS logic? You can still suffer on large operations because the scheduler has no ability to organize; it can only "rush from A to B", for lack of a better way to describe it. Analyze/organize/timer, execute, check timer, rinse and repeat... and none of those really have to be in that order for the deadline scheduler to be efficient!
Sio is basically deadline without the ability to sort queues, although I would put it equal to noop.
Also, deadline does have the ability to merge small sorted requests. It's just like... a 20-trick pony. That's why I use it pretty much exclusively.
When I was working on the Gingerbread project for the Nexus S, myself and 3 of my colleagues had a short discussion over some Subway about which IO scheduler to set as default in the Nexus S kernel. Deadline made the most sense for Android at the time. It still does, in my opinion; nothing has changed since then in terms of what else is available.
SIO is not bad, however. I would even say use it over noop, but you won't really see any real performance increase from it. The two are still equal.

cun7 said:
Great, very convincing. My last question would be: what kind of overhead is required to sort the requests as deadline does? Is it still more desirable than the simplicity of FCFS?

themichael said:
The "sorting" is very minimal overhead. Basically, deadline queues are sorted by their expiration time (their deadline) and their sector number. I'll try to explain this without making it sound too confusing...
Before an operation request is served, the scheduler decides which queue needs attention first. Read requests by default have higher priority – but not absolute priority (this is why ROW is weak) – and are given a deadline of 500 ms, versus 5 seconds for a write request.
What happens is the deadline scheduler checks whether the FIRST request in the deadline queue has expired; if it has not, it simply processes the sorted "sector batch".
Does that make sense? Each request is given a pre-defined deadline, and the scheduler simply checks that requests are being processed within the time specified in the tunables write_expire and read_expire.
To tell you what time it is without explaining how to build a clock, I'll just answer your question: very little overhead is needed to operate this scheduler.
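The expiry check described above can be sketched as a toy calculation (illustrative only, not kernel code): a request has expired once the time since it was submitted reaches its expire window (read_expire or write_expire):

```shell
# Toy model of the deadline expiry check, NOT kernel code.
# A request "expires" once now - submit >= its expire window.
expired() {  # expired <submit_ms> <expire_ms> <now_ms>
  if [ $(( $3 - $1 )) -ge "$2" ]; then
    echo yes
  else
    echo no
  fi
}

expired 0 500 600    # read submitted at t=0, checked at t=600 ms
expired 0 5000 600   # write submitted at t=0, checked at t=600 ms
```

With the defaults quoted above, the read in this example has expired and jumps the queue, while the write is still comfortably inside its 5-second window, so the sorted sector batch keeps flowing.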

cun7 said:
Great. Love it. I'll give this a try. I have always liked the philosophy behind deadline anyway.

Laying down some serious knowledge. . All this is so eloquently said.. Thank you
Sent from my SPH-L720 using Tapatalk

cun7 said:
Dude, awesome write-up. Just learned a lot from this. One question: what are your thoughts on the Zen IO scheduler?
Sent from my SM-N900T using Tapatalk

Great observations.
Question:
What Rom/kernel combo do you have, currently?

flak0 - I don't know enough about "zen" to give you an opinion on it. Maybe find some technical info and post it here?
moSess - I am using Tweaked 2.0 with Beast Mode 2.35 Kernel
Sent from another galaxy

OP, what do you think about the multicore power saving option and MP-Decision? The multicore power saving option in the Beast Mode kernel tries to group tasks on the fewest cores to save battery. What I'm wondering is whether it's actually worth it – if tasks are limited to 1 core, doesn't the CPU spend more time completing the task than if I left the option off, making it worse for my battery life? And I've never really understood the whole MP-Decision thing. In the past I'd just disable it if the kernel had intelli-plug, but should I leave it on if there's no intelli-plug option? I hope you can clear some of this up for me, thanks!

@steezmyster
The multi core power saving option is useful, and yes it does save power overall despite the fact that you are loading 1 or 2 cores a little more. This is all the more reason to use the snappy interactive governor rather than ondemand. High CPU load is handled quicker by jumping to a mid or high frequency.
In my opinion the mpdecision deamon is useless. You will get better performance and more accurate load handling by simply allowing the CPU drivers to do their job. More responsive and less "spiking".
I have not been using mpdecision since I got my Note 3. I get great battery life and performance.
Keep it simple. Don't buy into all the buzz word hype mods that many throw around. They typically are more detrimental to performance than beneficial.
I'll add to this post later when I get back. I'm on the road now and talking to type out this reply.
Sent from another galaxy

Thanks for the reply, it was very insightful!


i/o scheduler

I was wondering what the best I/O scheduler is for our device. I see everyone in DEV recommending CFQ, but is there any noticeable difference between any of the other I/O schedulers, such as noop, anticipatory, and deadline?
And if anyone could point me towards somewhere I could read and understand a bit about what an I/O scheduler actually is... that'd be awesome. Thanks!
I have read that noop is good to use. I believe I also read that BFQ yields the best performance but only some kernels have that option.
Sent from my SPH-D700 using XDA Premium App
I have always used CFQ, but that's because I'm a Linux guy. I've recently been guided to look into noop, and I believe it is the best for solid state or flash devices, because it leaves the performance optimization to the storage subsystem (XSR), not wasting resources doing double work. Flash drives don't need data grouped close together on a platter like a physical disk to reduce seek time.
EDIT: BFQ is like an enhanced CFQ, and thanks to decad3nce for pointing me to look more into NOOP
Where does that leave deadline? I always see it as an option
Sent from my SPH-D700 using XDA Premium App
Deadline is mainly used in database application servers. It basically means each read or write transaction has a deadline – usually 400 ms on a read, and 3-5 s on a write. It is best for high performance platter discs, and it queues the reads and writes; not really practical for a workstation or a phone. Definitely not a good idea for a battery operated device, IMO, even with a journal: a 3-5 second queued write, which still has a commit time after that, means you could lose data even with a journal.
JohnCorleone said:
I'm using the Genocide Kernel, and latest voltage control app, but it seems to not have BFQ as an option, so I'll give noop a try. Thanks

[REF][Super Friendly] Explanation of Governors, I/O Schedulers and Kernels [24-Nov]

Introduction
"It takes few hours to make a thread but it doesn't even take few seconds to say Thanks"- arpith.fbi
Code:
Don't be afraid to ask me anything.
I won't bite, but I might lick you.
Just thank me for this super brief thread.
Give credits to this thread by linking it if you're using any of my info.
Thank you to you too
Have you unlocked the bootloader of your current device? If so, read it! If not, learn the benefits! :victory:
What is this thread about? It is a very brief explanation of every governor and scheduler to let you find the best combo for your device.
I've been searching a lot for information about kernels, governors, I/O schedulers, and also Android optimization tips – whether on Google, XDA, or other Android forums. I dig in and try the best I can to find this info, so I thought of sharing it here for the Xperia S, Acro S, and Ion users.
My main reason for sharing this is to give users better knowledge about kernels, governors, I/O schedulers, and tips on Android optimization. I'm not sure where this should be posted; it's related to kernels, governors, and schedulers, so I think it's best if I share it here. Yes, I wrote it word by word with references. Happy learning. :angel:
After months on XDA, whether in a development forum or the Off Topic forum, users kept asking what's this, what's that. And I'm sure that not all members will understand until they bump into my thread.
FAQs regarding :-
-I/O Schedulers
-Kernel Governors
-Better RAM
-Better Battery
-FAQs
*Will add more when I find something useful.
I do a lot of asking by PM, to learn – it doesn't matter whether it's a stupid question. (People who know me understand.)
With my experience and lots of asking, I managed to find a lot of info that we can use to optimize our phones.
I will try to explain as clearly as I can.
Governors :-
-Smoothass
-Smartass
-SmartassV2
-SavagedZen
-Interactivex
-Lagfree
-Minmax
-Ondemand
-Conservative
-Brazilianwax
-Userspace
-Powersave
-Performance
-Scary
-Lulzactive *
-Intellidemand *
-Badass *
-Lionheart *
-Lionheartx *
-Virtuous *
* Not enough information about it, will add it later on.
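Before digging into the explanations, it's worth checking which of these governors your kernel actually ships. A small sketch using the standard cpufreq sysfs path (an assumption – some kernels expose things differently):

```shell
# Standard cpufreq sysfs location; verify on your own device.
CPU0=/sys/devices/system/cpu/cpu0/cpufreq

# Print the governors this kernel offers, and which one is active.
list_governors() {
  if [ -r "$1/scaling_available_governors" ]; then
    echo "available: $(cat "$1/scaling_available_governors")"
    echo "current:   $(cat "$1/scaling_governor")"
  else
    echo "cpufreq sysfs not available"
  fi
}

list_governors "$CPU0"
```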
Explanation
OnDemand
Brief
Available in most kernels, and the default governor in many of them. When the CPU load reaches a certain point, OnDemand will rapidly scale the CPU up to meet demand, then gradually scale the CPU down when it isn't needed.
Review
The brief says it all. By a simple explanation, OnDemand scales up to the frequency required for the action you are doing and scales back down after use.
Conservative
Brief
It is similar to the OnDemand governor, but will scale the CPU up more gradually to better fit demand. The Conservative governor provides a less responsive experience than OnDemand, but it does save battery.
Review
Conservative is the opposite of Interactive; it will slowly ramp up the frequency, then quickly drops the frequency once the CPU is no longer under a certain usage.
Interactive
Brief
Available in latest kernels, it is the default scaling option in some stock kernels. Interactive governor is similar to the OnDemand governor with an even greater focus on responsiveness.
Review
Interactive is the opposite of Conservative; it quickly scales up to the maximum allowed frequency, then slowly drops the frequency once no longer in use.
Performance
Brief
The Performance governor locks the phone's CPU at maximum frequency. While this may sound like an ugly idea, there is growing evidence to suggest that running a phone at its maximum frequency at all times allows a faster race-to-idle. Race-to-idle is the process by which a phone completes a given task and then returns the CPU to an extremely efficient low-power state.
Review
Good at gaming – really good. The disadvantage is that sustained maximum frequency produces heat and drains the battery, and may damage your phone with too much usage.
Powersave
Brief
The opposite of the Performance governor, the Powersave governor locks the CPU frequency at the lowest frequency set by the user.
Review
Set your desired minimum frequency and you won't have to look for your charger for a while.
Scary
Brief
A new governor written based on Conservative with some Smartass features; it scales according to Conservative's way. It will start from the bottom and spends most of its time at lower frequencies. The goal is to get the best battery life with decent performance. It will give the same performance as Conservative right now.
Review
Hmm.. Overall I don't see any difference. After I understood its main objective, I was very curious and decided to use it again. Results were the same – no difference. Report to me if anyone has tested this.
Userspace
Brief
Userspace is not a governor preset, but instead allows non-kernel daemons or apps with root permissions to control the frequency. Commonly seen as redundant and not useful, since SetCPU and NoFrills exist.
Review
Highly not recommended for use.
Smartass
Brief
It is based on the concept of the Interactive governor.
Smartass is a complete rewrite of the code of Interactive. Performance is on par with the “old” minmax and Smartass is a bit more responsive. Battery life is hard to quantify precisely but it does spend much more time at the lower frequencies.
Review
Smartass is the governor that will save your battery while still making good use of your processor for daily use. Like the brief explanation said, "Smartass will spend much more time at the lower frequencies," so logically you don't need sleep profiles anymore.
SmartassV2
Brief
Theoretically a merge of the best properties of Interactive and OnDemand; automatically reduces the maximum CPU frequency when phone is idle or asleep, and attempts to balance performance with efficiency by focusing on an "ideal" frequency.
Review
This is a big favourite with everybody; I believe almost everyone here is using SmartassV2. Yes, it is better than Smartass because of its speed – no scaling from min to max over a short period of time.
Smoothass
Brief
A much more aggressive version of Smartass that is very quick to ramp up and down, and keeps the idle/asleep maximum frequency even lower.
Review
In my personal experience, this is really useful for daily use, and yes, I'm using it all the time. It may decrease your battery life. I saw it OC itself to 1.4 GHz when I set it to 1.2. Good use. Recommended.
Brazilianwax
Brief
Similar to SmartassV2. More aggressive scaling, so more performance, but less battery.
Review
Based on SmartassV2, but its advantage is that it is a much more performance-oriented governor.
SavagedZen
Brief
Another SmartassV2 based governor. Achieves good balance between performance & battery as compared to Brazilianwax.
Review
Not much difference compared to SmartassV2, but it is an optimized version of it.
Lagfree
Brief
Again, similar to Smartass but based on Conservative rather than Interactive, instantly jumps to a certain CPU frequency after the device wakes, then operates similar to Conservative. However, it has been noted as being very slow when down-scaling, taking up to a second to switch frequencies.
Review
Used it before. Like the name of the governor, I didn't experience any lag whatsoever. Another governor based on performance, but not battery efficient.
MinMax
Brief
MinMax is just a normal governor. No intermediate frequency scaling is used.
Review
Well.. it's too normal that I can't really say anything about it..
Interactivex
Brief
InteractiveX governor is based heavily on the Interactive governor, enhanced with tuned timer parameters to optimize the balance of battery vs performance. InteractiveX governor's defining feature, however, is that it locks the CPU frequency to the user's lowest defined speed when the screen is off.
Review
To put the brief more plainly for you users: this is an Interactive governor with a wake profile. More battery friendly than Interactive.
Because current kernels don't have these governors, I will delay their explanations – it's very interesting. If you want it ASAP, post below:
-Lulzactive *
-Intellidemand *
-Badass *
-Lionheart *
-Lionheartx *
-Virtuous *
**********************************************************************************************************************************************************************
I/O Schedulers(thanks to droidphile)
Deadline
The goal is to minimize I/O latency and starvation of requests. This is achieved with a round-robin policy that is fair among multiple I/O requests. Five queues are used to aggressively reorder incoming requests.
Advantages:
Nearly a real time scheduler.
Excels in reducing latency of any given single I/O.
Best scheduler for database access and queries.
The bandwidth requirement of a process - i.e. what percentage of CPU it needs - is easily calculated.
Like noop, a good scheduler for solid state/flash drives.
Disadvantages:
When the system is overloaded, the set of processes that may miss their deadlines is largely unpredictable.
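As a rough illustration of the deadline idea, here is a toy Python sketch (NOT the kernel implementation; the real scheduler keeps separate sorted and deadline queues for reads and writes, which this collapses into one list). Requests are normally served in sector order for throughput, but a request whose deadline has expired jumps the queue, which is what prevents starvation:

```python
# Toy sketch of the deadline policy: serve in elevator (sector) order
# for throughput, unless some request's deadline has already expired,
# in which case the most overdue request is served first.
def dispatch(requests, now):
    """requests: list of (sector, expiry_time). Returns the next request."""
    expired = [r for r in requests if r[1] <= now]
    if expired:
        return min(expired, key=lambda r: r[1])   # most overdue first
    return min(requests, key=lambda r: r[0])      # else lowest sector
```

With requests `[(500, 100), (10, 50), (300, 5)]`, at `now=0` the lowest sector `(10, 50)` is served, but at `now=6` the expired `(300, 5)` jumps ahead.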
Noop
Inserts all incoming I/O requests into a First In First Out queue and implements request merging. Best used with storage devices that do not depend on mechanical movement to access data. The advantage here is that flash drives do not require reordering of multiple I/O requests, unlike normal hard drives.
Advantages:
Serves I/O requests with least number of cpu cycles. (Battery friendly?)
Best for flash drives since there is no seeking penalty.
Good throughput on db systems.
Disadvantages:
The reduction in the number of CPU cycles used comes with a proportional drop in performance.
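Noop's FIFO-plus-merging behaviour can be sketched in a few lines of Python (a toy model, not kernel code; requests are modelled as `(start_sector, length)` pairs):

```python
# Toy sketch of noop: a plain FIFO where a new request contiguous with
# the tail of the queue is merged into it instead of queued separately.
# No sorting, no reordering - which is why it is so cheap on CPU cycles.
def noop_insert(queue, start, length):
    if queue and queue[-1][0] + queue[-1][1] == start:
        s, l = queue.pop()                 # back-merge with the tail
        queue.append((s, l + length))
    else:
        queue.append((start, length))      # otherwise plain FIFO append
```

Inserting `(0, 8)`, then `(8, 8)`, then `(100, 4)` leaves the queue as `[(0, 16), (100, 4)]`: the two contiguous requests were merged, the distant one was not.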
Anticipatory
Based on two facts:
i) Disk seeks are really slow.
ii) Write operations can happen whenever, but there is always some process waiting for a read operation.
So Anticipatory prioritizes read operations over writes. It anticipates synchronous read operations.
Advantages:
Read requests from processes are never starved.
As good as noop for read-performance on flash drives.
Disadvantages:
The 'guesswork' might not always be reliable.
Reduced write-performance on high performance disks.
BFQ
Instead of the time slices allocated by CFQ, BFQ assigns budgets. The disk is granted to an active process until its budget (number of sectors) expires. BFQ assigns high budgets to non-read tasks. The budget assigned to a process varies over time as a function of its behavior.
Advantages:
Believed to be very good for usb data transfer rate.
Believed to be the best scheduler for HD video recording and video streaming. (because of less jitter as compared to CFQ and others)
Considered an accurate i/o scheduler.
Achieves about 30% more throughput than CFQ on most workloads.
Disadvantages:
Not the best scheduler for benchmarking.
A higher budget assigned to a process can hurt interactivity and increase latency.
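The budget idea can be illustrated with a toy Python sketch (the `bfq_round` helper and all numbers are made up for illustration; real BFQ also adapts budgets over time and handles preemption). The key difference from CFQ is that the grant is counted in sectors transferred, not milliseconds:

```python
# Toy sketch of BFQ's budgets: each task holds the disk until its
# budget (in sectors) is used up or it runs out of pending work,
# then service moves on to the next task.
def bfq_round(tasks):
    """tasks: list of (name, budget_sectors, pending_sectors).
    Returns (name, sectors_served) pairs in service order."""
    return [(name, min(budget, pending)) for name, budget, pending in tasks]
```

For example, `bfq_round([("reader", 8, 100), ("writer", 32, 10)])` serves 8 sectors for the reader (budget-capped) and 10 for the writer (work-limited).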
CFQ
Completely Fair Queuing scheduler maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. Each per-process queue contains the synchronous requests from that process. The time slice allocated to each queue depends on the priority of the 'parent' process. V2 of CFQ has some fixes that address process I/O starvation, and it allows some small backward seeks in the hope of improving responsiveness.
Advantages:
Considered to deliver a balanced i/o performance.
Easiest to tune.
Excels on multiprocessor systems.
Best database system performance after deadline.
Disadvantages:
Some users report that media scanning takes the longest to complete with CFQ. This is likely because bandwidth is distributed equally among all I/O operations during boot-up, so media scanning is not given any special priority.
Jitter (worst-case delay) can sometimes be high because of the number of tasks competing for the disk.
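A toy Python sketch of the per-process time-slice idea (the base slice length and the priority-to-slice formula below are invented for illustration; real CFQ's slice accounting is more involved). I/O priority runs from 0 (highest) to 7 (lowest), and a higher-priority process gets a longer slice of disk time per round-robin turn:

```python
# Toy sketch of CFQ: each process has its own queue, and the disk is
# handed round-robin between queues; the slice length scales with the
# process's I/O priority. BASE_SLICE_MS and the formula are made up.
BASE_SLICE_MS = 100

def cfq_slices(procs):
    """procs: list of (name, ioprio) with ioprio 0 (highest) .. 7 (lowest).
    Returns (name, slice_ms) pairs in round-robin order."""
    return [(name, BASE_SLICE_MS * (8 - prio) // 8) for name, prio in procs]
```

So a priority-0 database process gets the full 100 ms slice per turn, while a priority-7 background media scan gets only a small one, which matches the observation above about media scanning under CFQ.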
SIO
Simple I/O scheduler aims to keep overhead minimal to achieve low latency when serving I/O requests. There is no priority-queue concept, only basic merging. SIO is a mix between noop and deadline: no reordering or sorting of requests.
Advantages:
Simple, so reliable.
Minimized starvation of requests.
Disadvantages:
Slow random-read speeds on flash drives, compared to other schedulers.
Sequential-read speeds on flash drives also not so good.
VR
Unlike other schedulers, synchronous and asynchronous requests are not treated separately; instead, a deadline is imposed for fairness. The next request to be served is chosen based on its distance from the last request.
Advantages:
May be best for benchmarking, because at the peak of its 'form' VR performs best.
Disadvantages:
Performance fluctuation results in below-average performance at times.
Least reliable/most unstable.
Credits
-droidphile
-kokzhanjia
Reserved for kernel info
Many thanks for sharing your knowledge on all of this! You made it very easy to understand
Sent from my LT26i using xda app-developers app
Thank u very much!
Thanks a lot !
Sent from my Xperia S using xda premium
Thanks for gathering all this info, it is a very handy guide.
You may want to add that this all works on locked bootloader as well. The big difference is you only get the stock kernel choices & no over clock. I use conservative & cfq thru 'cpu master' my locked ION
~Jaramie
Sent from my ION
how about hotplug - pegasusq ??????? can u explain this governors ?????
Segarys said:
Many thanks for sharing your knowledge on all of this! You made it very easy to understand
Sent from my LT26i using xda app-developers app
davidbar93 said:
Thank u very much!
Xecutioner_Venom said:
Thanks a lot !
Sent from my Xperia S using xda premium
ToledoJab said:
Thanks for gathering all this info, it is a very handy guide.
You may want to add that this all works on locked bootloader as well. The big difference is you only get the stock kernel choices & no over clock. I use conservative & cfq thru 'cpu master' my locked ION
~Jaramie
Sent from my ION
Thanks
saberamani said:
how about hotplug - pegasusq ??????? can u explain this governors ?????
Yeah, i will add it along with other unexplained governors
Thanks for reminding..

[Q]Kern-fused need input.

Ok im looking at kernels and im not going to ask "whats the best?" but im really not understanding the difference.
What im looking for is a kernel thats stable (that seems like all of them), one that allows under-clock/volting (and any other battery saving tricks) and one that will work well with my rom (XenonHD rc3) as most of the kernels seem to be using anyrom i dont think this is an issue.
i have been using the stock kernel then tinys kernel but im wondering if Zen or Air are going to serve me better?
Here is the order im looking at things
Stability
battery
speed
cosmetics
From what i can tell the governors dont seem to matter much as long as there are a few available (performance, interactive, conservative, power-saver) and the schedulers are even less important as they can handle normal use just fine. SIO or no-op or CFQ all work just fine for me. never tried FIFO but it seems kinda restrictive when multitasking
So from a development standpoint could someone explain whats so different in TINY, ZEN, and AIR? I would much appreciate your input. They all seem to start from google source; are they compiled differently?
Ok so i am trying Zen and i like that the CPU can be clocked lower. but im still not sure about whats best for me. A comparison chart would be grand but i have no idea what to compare
The major differences between kernels are what kernel version they're compiled from, what modules are compiled into the kernel, which I/O schedulers are included, and which CPU governors are included. Depending on what the kernel dev has included, the kernel tends to run better or worse on specific devices. Unfortunately, it tends to vary quite a bit even within a single device line.
Zen is the best one I've found yet for my device. Others swear by Franco, Air, Trinity, etc. It's really a matter of trial and error on a device-by-device basis.
Finally, your statement about governors and schedulers not being that important is a bit wrong, in my opinion. Schedulers are definitely the lesser of the two, but depending on your usage, you can get a little bit of an I/O performance increase by using the "right" scheduler. The same thing goes for governors. A properly tweaked governor can save a bit of battery and/or boost your performance. Just like the kernels themselves, though, it would vary device-by-device and based on the user's usage type.

[TWEAK] [GUIDE] I/O SD CARD SPEED Tuning - Test Results [UPDATED 15.07]

[UPDATED JULY, 15]
So... I was on my way to hell cleaning, tweaking and buttering my GNX.
And then I discovered the Trickster MOD!!! AWESOME MOD!!!
But I didn't find any advice on how I/O Control should be set on the GNX. (Searched the forum but found nothing about this.)
I found what each of the schedulers does:
SIOplus is based on SIO (2012), but has some slight modifications:
- The starved write requests counter only counts when there actually
are write requests in the queue
- Fixed the bug that the writes_starved were not initialized on init
- Implemented new tuneables
The NOOP scheduler inserts all incoming I/O requests into a simple FIFO queue and implements request merging.
The scheduler assumes that the host is definitionally unaware of how to productively re-order requests. This could be because I/O scheduling is handled at a lower layer of the I/O stack (at the block device, by an intelligent RAID controller, Network Attached Storage, or by an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network).[1] Since I/O requests are potentially re-scheduled at the lower level, resequencing IOPs at the host level can create a situation where CPU time on the host is spent on operations that will just be undone when they reach the lower level, increasing latency and decreasing throughput for no productive reason.
Another reason is that accurate details of sector position are hidden from the host system. An example would be a RAID controller that performs no scheduling on its own. Even though the host has the ability to re-order requests and the RAID controller does not, the host system lacks the visibility to accurately re-order the requests to lower seek time. Since the host has no way of knowing what a more streamlined queue would "look" like, it cannot restructure the active queue in its image, but merely passes requests on to the device that is (theoretically) more aware of such details.
The main goal of the Deadline scheduler is to guarantee a start service time for a request.[1] It does that by imposing a deadline on all I/O operations to prevent starvation of requests. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by the sector number.
Before serving the next request, the Deadline scheduler decides which queue to use. Read queues are given a higher priority, because processes usually block on read operations. Next, the Deadline scheduler checks if the first request in the deadline queue has expired. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue.
By default, read requests have an expiration time of 500 ms, write requests expire in 5 seconds.
CFQ places synchronous requests submitted by processes into a number of per-process queues and then allocates timeslices for each of the queues to access the disk. The length of the time slice and the number of requests a queue is allowed to submit depends on the I/O priority of the given process. Asynchronous requests for all processes are batched together in fewer queues, one per priority. While CFQ does not do explicit anticipatory I/O scheduling, it achieves the same effect of having good aggregate throughput for the system as a whole, by allowing a process queue to idle at the end of synchronous I/O thereby "anticipating" further close I/O from that process. It can be considered a natural extension of granting I/O time slices to a process.
ROW:- ROW stands for "READ Over WRITE", which is the main request-dispatch policy of this algorithm. The ROW I/O scheduler was developed with the needs of mobile devices in mind. In mobile devices we favor user experience above everything else, thus we want to give READ I/O requests as much priority as possible. Mobile devices won't have as many parallel threads as desktops; usually it's a single thread, or at most two simultaneous working threads for read and write. Favoring READ requests over WRITEs decreases READ latency greatly.
The main idea of the ROW scheduling policy is: if there are READ requests in the pipe, dispatch them, but don't starve the WRITE requests too much. Below you'll find a small comparison of ROW to existing schedulers. The test run for these measurements was a parallel read and write.
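The "dispatch reads but don't starve writes" policy can be sketched in Python (a toy model, not the kernel code; `MAX_WRITE_STARVED` is a made-up stand-in for ROW's write-starvation tolerance tunable):

```python
# Toy sketch of ROW: reads are dispatched first, but a counter limits
# how many reads may pass while writes are waiting, so writes are not
# starved indefinitely. MAX_WRITE_STARVED is a hypothetical tunable.
MAX_WRITE_STARVED = 3

def row_dispatch(reads, writes, starved):
    """reads/writes: pending request counts. Returns (op, new_starved)."""
    if reads and (not writes or starved < MAX_WRITE_STARVED):
        # bump the starvation counter only if writes are actually waiting
        return "read", starved + (1 if writes else 0)
    if writes:
        return "write", 0          # serving a write resets the counter
    return None, starved
```

With both queues non-empty, the first three dispatches are reads; the fourth is forced to be a write, resetting the counter.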
BFQ - "Budget Fair Queueing" - similar to CFQ, but it distributes the workload fairly, prioritizing work or tasks requiring more bandwidth when the app or task has set a priority (usually requested from within the app).
Ok, and now that you know ALL that - which I/O scheduler should you try, and with which Read Ahead Buffer Size? :laugh:
Obviously you don't know; you must test it..
So I took my time and tested all the schedulers with different Read Ahead Buffer Sizes. (Tests were done with ANTUTU)
And here are my findings (higher score is better, high scores are highlighted):
=================================
This test is done with AOKP and FANCY KERNEL R33.
Most probably there will be almost identical results with other ROMs, but NOT with other kernels!
=================================
(benchmark result screenshots attached)
Second test:
=================================
I am currently on PURITY 5.7.1 with FANCY KERNEL R35 standard (ondemandplus governor).
=================================
At the moment I have set it to BFQ with 256kb.
BTW - great Database I/O this time!!!
If you have different experiences with I/O schedulers on the GNX please share it with us
Do note that messing with the read-ahead value can actually game the benchmark figures. I hope you realise what the read-ahead value actually means, and what its settings entail.
You should also take into account that each of these schedulers can be tuned (as a few users in the Franco kernel thread are doing).
Perhaps hit up that thread, try out a few of their winning values, and try the benchmark again?
Most importantly, bear in mind that a lot of these benchmarks are pointless and don't reflect real world usage. So long as your device feels smooth and responsive, it doesn't particularly matter which IO governor you use (within reason. Some are obviously better than others).
psyren said:
Do note that messing with the read-ahead value can actually game the benchmark figures. I hope you realise what the read-ahead value actually means, and what its settings entail.
You should also take into account that each of these schedulers can be tuned (as a few users in the Franco kernel thread are doing).
Perhaps hit up that thread, try out a few of their winning values, and try the benchmark again?
Most importantly, bear in mind that a lot of these benchmarks are pointless and don't reflect real world usage. So long as your device feels smooth and responsive, it doesn't particularly matter which IO governor you use (within reason. Some are obviously better than others).
I am aware that benchmarks do not reflect real-world usage. They do offer an insight, though... and there is no other quantitative way to measure your tweak, unfortunately - or at least I don't know of one.
And I don't think anybody has enough time to test each combination for a couple of hours just to find the winning combination in his case....
I'll check the Franco forum
What a great needed thread , Thank you ,it's very helpful to me
You should clearly write that these are the results for fancy kernel, because every kernel is different so for example on franco kernel (stock not tweaked ) the results vary a lot.
done
Sent from my Galaxy Nexus using xda app-developers app
thank you very much for this thread and for your time testing this stuff... way to go man!
That must have took some time But nice work. It would be nice even more of these tests
juntulis said:
That must have took some time But nice work. It would be nice even more of these tests
well first of all - i did this for myself... but then I thought "wait! hey! I can share this with the rest of the guys that maybe searched for this for hours like me and did not find it..." It took me 15 minutes more to write down the test results and post this. i dont know how helpful it is... but at least i spared 10 of you some time.... so i feel the 15 minutes were well invested....
@all the developers: if u need a tester, here I am, just PM me.. I am 8 hours at work and just about 10% of the time busy... and I invest some serious time of it in this GNX.... so if u need an extensive test or so.... PM me!!!!!!
Sent from my Galaxy Nexus using xda app-developers app
Well well well...
New score for the following pair:
Fancy kernel stock (unmodified scheduler or read ahead)
Purity ROM 5.7
Score is pretty low per total also because I underclocked the CPU as you can see (I don't play any games on it, I want battery life and simple ROM, etc)
But some interesting memory scores...
PS: at least for me, Purity + Fancy is the winning combination. Tried : PA, HD revolution, JellyBeer, AOKP.
Nothing smoother.. At least till now
Sent from my Galaxy Nexus using xda app-developers app
Try testing again with AndroBench
https://play.google.com/store/apps/details?id=com.andromeda.androbench2&hl=et
It also gives random read and random write. I don't trust antutu that much to take it as gold.
sherincal said:
Try testing again with AndroBench
https://play.google.com/store/apps/details?id=com.andromeda.androbench2&hl=et
It also gives random read and random write. I don't trust antutu that much to take it as gold.
Challenge accepted)
I'll give it a go next week at some point (have a cisco exam knocking at the door, so not too much spare time..).
Sent from my Galaxy Nexus using xda app-developers app
sherincal said:
Try testing again with AndroBench
https://play.google.com/store/apps/details?id=com.andromedndrobench2&hl=et
It also gives random read and random write. I don't trust antutu thrat much to take it as gold.
Click to expand...
Click to collapse
and I'm just the opposite! But only cuz androb is old. probably won't make much of a difference with io testing though, as long as not comparing results between the 2 different apps.
more important is using the same benchmark app all the time for comparisons. not all are equal. testing method and testing tools must be consistent or results may get skewed.
and fwiw - running liquid and Franco 378 - not much of a difference running noop/3072 compared to row or deadline at 512. interesting for me though, best io performance was when running Franco with beanstalk. Rom makes a difference too.
I never get a database i/o score this high. Any idea why?
With noop / 3072 I am getting 410, 113, 194.
Sent from my Galaxy Nexus using xda app-developers app
Using what kernel?
Sent from my Galaxy Nexus using xda app-developers app
kreindler said:
Using what kernel?
Sent from my Galaxy Nexus using xda app-developers app
Fancy r33
Sent from my Galaxy Nexus using xda app-developers app
Pretty strange.... I really dont understand it...
Even if it shouldnt matter too much: what rom are you using?
Sent from my Galaxy Nexus using xda app-developers app
kreindler said:
Pretty strange.... I really dont understand it...
Even if it shouldnt matter too much: what rom are you using?
Sent from my Galaxy Nexus using xda app-developers app
I am using XenonHD v11.1
On Fancy r34 I am getting 405, 138, 194
Sent from my Galaxy Nexus using xda app-developers app
To me your results suggest using BFQ as it is consistently fast regardless of the settings or methods tested.
Either way, i can guarantee that those tunings are mainly for your peace of mind (so that you know you have fine-tuned every corner of your phone to your personal preference). The differences are too small to make a real change in all-day usage of the phone.
Furthermore, synthetic benchmarks do not always reflect actual speed and performance of your device.
For example:
I had an htc sensation xe - which was swapped for a galaxy s2:
Similar specs, very similar synthetic benchmark results, both stock rom, rooted, stock kernel stock settings of kernel:
HUGE DIFFERENCE!!!!! THE S2 WAS BLAZING FAST COMPARED TO THE SENSATION!!!!! No lag! Etc...
It's like comparing cars: compare the F430 to the GTR:
Similar specs, huge performance difference.
This guide is only for those who take that into consideration; combined with MANY other mods and tweaks, it makes an improvement in terms of performance!!!!
STANDALONE, I'm pretty sure no improvement is visible to the human eye, even if it's the eye of an xda maniac)
Sent from my Galaxy Nexus using xda app-developers app

Cpu governor sched

I recently rooted and installed a custom Rom and ElementalEx Kernel. I noticed that the default profiler for the Pixel XL is something called sched which appears to be a fairly conservative governor. I changed it to interactive which I normally use on other phones and battery life suffered considerably. I switched it back to sched and cpu frequencies are much lower but I don't notice any drop in performance. I tried researching sched but haven't found much. Anyone know how this governor works?
Sched is the most efficient governor for our pixel.
The real question is why do you want to change the governor? If you really want some things to mess with to increase battery and/or performance then check out "L speed (boost&battery)" in the play store. And No, this isn't shameless advertising either, the dev is right here in our forums.
noidea24 said:
Sched is the most efficient governor for our pixel.
The real question is why do you want to change the governor? If you really want some things to mess with to increase battery and/or performance then check out "L speed (boost&battery)" in the play store. And No, this isn't shameless advertising either, the dev is right here in our forums.
It was a reflex reaction because I'm used to using Interactive and the CPU frequencies seemed really low compared with what I would see on the 6P. Those frequencies on the 6P would result in a noticable performance lag. But I guess the phone design is really different so you can't compare them that way. I'll check out L Speed. Never heard of it. Thanks.
jhs39 said:
It was a reflex reaction because I'm used to using Interactive and the CPU frequencies seemed really low compared with what I would see on the 6P. Those frequencies on the 6P would result in a noticable performance lag. But I guess the phone design is really different so you can't compare them that way. I'll check out L Speed. Never heard of it. Thanks.
Same. Coming from the 6p / 5x scene. The best governor was interactive, especially due to all the tweaks and changes that could be applied to the governor (like on ElementalX).
But no, pixel is almost dedicated to sched. I honestly don't think anyone else is running anything else and getting decent results from it
Pixel/XL uses EAS, so governors like sched, schedutil, etc. Meanwhile, every other Android device uses HMP with your regular governors like interactive, ondemand, performance, conservative, etc.
So it's recommended to use sched as the default, but you can learn more about EAS here. Freak07 also has some great info on EAS and its governors here
noidea24 said:
Same. Coming from the 6p / 5x scene. The best governor was interactive, especially due to all the tweaks and changes that could be applied to the governor (like on ElementalX).
But no, pixel is almost dedicated to sched. I honestly don't think anyone else is running anything else and getting decent results from it
All kinds of us are running schedutil gov on custom kernels. It's the next iteration of EAS sched gov. Better performance than sched.
Yeah same here, I would always use L Speed or some different governor on previous phones than default, cause default was always not the most efficient, both performance and battery backup wise, but on Pixel there is no need. In fact devs have suggested us to use the default Sched governor, so I am sticking with it.
Anyone on ElementalEx change any settings other than the governor to improve performance on the Pixel XL? A lot of the available settings are different than I'm used to.
Hi, everyone I'm s7edge owner would like to use a the Google pixel governor, can someone show me the direction please? Thanks
lovetv said:
Hi, everyone I'm s7edge owner would like to use a the Google pixel governor, can someone show me the direction please? Thanks
Not possible without an EAS compatible ROM/Kernel
ithehappy said:
Yeah same here, I would always use L Speed or some different governor on previous phones than default, cause default was always not the most efficient, both performance and battery backup wise, but on Pixel there is no need. In fact devs have suggested us to use the default Sched governor, so I am sticking with it.
I think the devs suggest using the sched governor because they are either too lazy or not knowledgeable enough to create other governors that are compatible with the Pixel. The sched governor is far from efficient. It's clearly a performance based governor and it allows the Pixel battery and CPU to get very hot very quickly. The Pixel CPU can get to 130F just performing what most people would consider standard tasks on their phones. I'm not even talking about gaming or anything remotely CPU intensive. How long do you think these phones are actually going to last when they heat up so much?
What's the difference between sched and schedutil? I saw schedutil is the default on ElementalX for my Pixel 2 XL.
