You guys have already had a taste of this (if you're on RootzBoat), but for the rest of you, here it is.
This kernel has been a collaboration between Jake Day and myself, bringing you tons of improvements over the stock kernel.
Jake has done the majority of the code fixes; he's doing great work on this kernel.
If you want to take a look at our voltages vs. the stock voltages, they're posted in the 2nd post.
Change Log:
1.0.7:
Update to 3.0.18
Kanged suspend and hotplug features from InteractiveX (just the features; to use them, keep running the Interactive governor)
rcu: Eliminate in_irq() checks in rcu_enter_nohz()
nohz: Remove nohz_cpu_mask
nohz: Make idle/iowait counter update conditional
nohz: Fix update_ts_time_stat idle accounting
nohz: Remove update_ts_time_stat from tick_nohz_start_idle
ARM: SMP: use a timing out completion for cpu hotplug
ARM: cache: assume 64-byte L1 cachelines for ARMv7 CPUs
ARM: vmlinux.lds.S: align the exception fixup table to a 4-byte boundary
ARM: vmlinux.lds.S: do not hardcode cacheline size as 32 bytes
scheduler: domain: init next_balance in nohz_idle_balancer with jiffies
Enable KSM
Super AMOLED Color Hack now enabled (adjustable in the upcoming RootzBoat release)
Fixed issue with gpio pins causing wakeups without enable_irq_wake() set
Fix bug with LPDDR CLK IO for suspend and idle to save power
Added open color format definitions
Force a DPLL clkdm/pwrdm ON before a relock
Fix gains on DL1 BE so values don't get lost and screw up sound
Fixed memory leak and dm timer handling
Create sysfs entry for egl.cfg so SGX can load correct OpenGL libraries
Readahead corrected
0.9.6:
Set default HZ to 250
disable fsync (makes data writes quicker and better on battery)
improve performance of deadline io scheduler
overclock core OPP100 to 220 and gpu OPP3 to 422
adjust voltages
improve cpu transition latency
OMAP4: HWMOD: UART1: disable smart-idle.
iosched (796d511): prevent aliased requests from starving other I/O
OMAP4: PM: work around for CPU1 onlining from OFF/OSWR state
fixed mbox recovery wakeup issue
adjust DSS latency constraint for deeper power state
OMAP4: HSI: Fix for back to back CAWAKE interrupts
Smartass2 stable again
Interactive adjusted
i2c-omap: use usleep_range(), get rid out of jiffies
hrtimers: teach usleep_range() to return how many usecs was slept
PM / Hibernate: Correct additional page calculation
ARM: smp_twd: make sure timer is stopped before registration
mm: vmscan: recompute page status when putting back
mm: vmscan: check page order in isolating lru pages
ARM: hwcaps: add new HWCAP defines for ARMv7-A
ARM: hwcaps: use shifts instead of hardcoded constants
mmc: fix deadlock from mmc core when suspend the device
ARM: Fix handling of pending IRQs at request time
mmc: change mmc_delay() to use usleep_range()
mm: memcg: update the correct soft limit tree during migration
OMAP4: USB: Fixed tshut reboot issue
Use same voltage for GPU OC as 307mhz
cpufreq: hotplug: do not synch threads on jiffies
block: avoid unnecessary plug list flush
block, sx8: kill blk_insert_request()
block: simplify force plug flush code a little bit
block: avoid building too big plug list
Some md updates from Linux mainline 3.2
Commits cherry picked from Linux 3.2
sched: don't call task_group() many times in set_task_rq()
nohz: Remove ts->inidle checks before restarting the tick
Init: Multithread initcalls to auto-resolve ordering issues.
memcg: mark rcu protected member as __rcu
mm/swap: make swapin readahead skip over holes
sched: fix nohz idle load balancer issues
Video playback solved
0.8.5:
improved CPU transition latency
fixed voltages for two additional frequency slots
fixed battery drain issues (battery life should be MUCH better)
added jRCU
tweaked smartass2 and interactive governors
some code and message clean up (going LEAN!)
fixed dispc issue with NO_SLEEP
patched to 3.0.17
fixed callbacks in mgr->blank
improved I/O latency
correct manager index for VSYNC irq handler
updated cpufreq and regulator driver
added version name to kernel
0.7.1:
added two additional frequency slots (from 5 slots to 7 slots) (please read known issues in second post)
lowest slot is now 250 mhz
fixed voltage ramping for lower frequencies (will help battery life)
some OTG clock and USB_DPLL fixes
0.6.8:
Hotplug governor stabilized
Ram issue fixed
0.6.7:
Added ramconsole
Reduced power consumption when idle
(both from Jake, great changes)
0.6.5:
Userspace undervolting fixed thanks to Jake... guy's a genius
updated to 3.0.14
SLOB is default memory allocator
adjust initialization of powerstates to fix power consumption
enable OMAP4_PMD_CLKS for saveram calls
add REG support for VCORE and gate resources during suspend
Improved clock frequency selection
Higher HSI throughputs
False IO wakeup fixed
PRM Register offsets fixed
Fast ramp down allowed
SmartAss2 optimizations
Change default scheduler CFS -> Autogroup
RCU boost enabled
cpufreq updates from 3.2
Use SWSUP instead of HWSUP to improve performance and power usage
SGX Active Power Latency set to 2ms
CK1 patched in (minus BFS)
On suspend, make sure resources will be gated with regulator_suspend_calls
Fixed suspend states for VAUX3, VUSIM, VANA, VCXIO, and VDAC
vfs cache pressure = 20
dirty ratio = 90
dirty background ratio = 70
swappiness = 0
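For anyone who wants to try the four VM values just above on another kernel before flashing, they map directly onto the standard /proc/sys/vm tunables and can be set at runtime on a rooted device. A minimal sketch in C (the paths are the stock Linux procfs entries; the values simply mirror the defaults listed above):

#include <stdio.h>

/* Write a single value into a /proc/sys/vm tunable (needs root). */
static int write_tunable(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", value);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Values mirror the kernel defaults listed in the changelog above. */
    write_tunable("/proc/sys/vm/vfs_cache_pressure", "20");
    write_tunable("/proc/sys/vm/dirty_ratio", "90");
    write_tunable("/proc/sys/vm/dirty_background_ratio", "70");
    write_tunable("/proc/sys/vm/swappiness", "0");
    return 0;
}

The same thing can obviously be done from an init script; the point is only that these are ordinary sysctl knobs, not kernel-only settings.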
Download Link:
http://bit.ly/zFDxxV
Mirror:
http://bit.ly/xrZvCh
Older Builds can be found here:
http://4ndr01d.com/gnex/kernels/
As always, I comply with GPL.
You can find my source here:
http://bit.ly/wA84bP
If you like our work, feel free to donate to me here:
https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=ZAPY38CTG4WS4
or
Jake here:
https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=HSADFMJTE2YB8
reserved for issues..
reserved just in case... you never know
dat changelog
I'm going to flash this and give it a go.
Sent from my Galaxy Nexus using Tapatalk
LexusBrian400 said:
dat changelog
It's beastly.. we've done a lot of work
Dudes. By FAR the best performing kernel I've used to date. Benchmark numbers are through the roof! Stable too. No reboots or hangups up to this point.
Sent from my Galaxy Nexus using Tapatalk
Enndr said:
Dudes. By FAR the best performing kernel I've used to date. Benchmark numbers are through the roof! Stable too. No reboots or hangups up to this point.
Sent from my Galaxy Nexus using Tapatalk
Glad to hear you like it. Working on some big changes for the next update. At some point, I want to get BFS booting. It's being a pain, lol.
Sorry for the noob question, but does it work on the GSM GNex?
EDIT: ALREADY TRIED IT... WORKS GREAT!
I like this kernel. Which governor do you recommend for stability and performance? I like the two added frequency slots. I don't use 250 (too low), but I like 500 MHz, because with other kernels I usually set 700 as the minimum frequency (350 gave me little lags on the homescreen after unlocking). Now I can use 500 as my minimum frequency. I'll also report how it goes on battery management!
PS: Any chance of also getting a 1420 MHz step like Lean Kernel experimental? I'd definitely stick with this if you consider that. Anyway, great job man!
EDIT 2: This kernel gives the best score I've ever had on AnTuTu... 7380!!! Kangbang rocks! Even if I don't care about benchmarks, this score is amazing! Now let's see how it goes in real use. Sliding pages on the homescreen is more fluid... I hope this isn't a placebo effect!
Gonna test this out...
After hours of testing, it seems I get micro-lag in the app drawer, even with the minimum frequency raised to 500 MHz. Strange, right after the flash hours ago I didn't feel that. Benchmark scores are still very high for me. I want to redownload the file and reflash.
Kind of a noob question: I'm currently running the Revolution HD ROM. Would this work without issues, or is this just for rooted stock ICS?
not sure what the problem is
I flashed this last night and it seemed fine. Then I noticed that the top of the phone got really hot playing a game, so I stopped playing it and went to bed. I woke up and my phone wouldn't turn on, and it was burning up, almost too hot to touch. I had to reflash to a different kernel, and when it booted the battery was only at 56% charged. I know it wasn't my phone, because I haven't had this happen with any other kernel I have used.
-J450N said:
Kind of a noob question: I'm currently running the Revolution HD ROM. Would this work without issues, or is this just for rooted stock ICS?
No, it's only for 4.0.3 ROMs.
sert00 said:
Sorry for the noob question, but does it work on the GSM GNex?
EDIT: ALREADY TRIED IT... WORKS GREAT!
I like this kernel. Which governor do you recommend for stability and performance? I like the two added frequency slots. I don't use 250 (too low), but I like 500 MHz, because with other kernels I usually set 700 as the minimum frequency (350 gave me little lags on the homescreen after unlocking). Now I can use 500 as my minimum frequency. I'll also report how it goes on battery management!
PS: Any chance of also getting a 1420 MHz step like Lean Kernel experimental? I'd definitely stick with this if you consider that. Anyway, great job man!
EDIT 2: This kernel gives the best score I've ever had on AnTuTu... 7380!!! Kangbang rocks! Even if I don't care about benchmarks, this score is amazing! Now let's see how it goes in real use. Sliding pages on the homescreen is more fluid... I hope this isn't a placebo effect!
The problem is that 7 is the recommended number of slots for OMAP processors, because of the way the registry board works. I want to play around with the frequencies to find a better spread, but most phones can't handle 1400+ anyway (mine can't, for example, so there's no real way for me to test).
Edit: And thanks for letting me know it works on the GSM Nexus. I figured it did, since I use the AnyKernel updater, but I hadn't had that tested.
jitajt said:
I flashed this last night and it seemed fine. Then I noticed that the top of the phone got really hot playing a game, so I stopped playing it and went to bed. I woke up and my phone wouldn't turn on, and it was burning up, almost too hot to touch. I had to reflash to a different kernel, and when it booted the battery was only at 56% charged. I know it wasn't my phone, because I haven't had this happen with any other kernel I have used.
You caught the frequency lock bug. Seems that it's been fixed in the new version released today, though.
RM KERNEL
Just a statement regarding kernel source: the kernel source is of course covered under GPL version 2. Free software does NOT mean no work or time was spent working on it. I have donated a large amount of my free time to hack on this kernel. If you use my modified kernel source in part or in its entirety, I kindly ask that you mention its origins and send me a GitHub pull request or PM whenever you find bugs or think you can help improve my kernel hack further. This way the entire community will truly benefit from the spirit of open source. Thank you.
RM Kernel for Optimus Me (pecan)
What is a Kernel?
The kernel is the foundation upon which everything else in a software system is built.
NOTICE: This kernel is ONLY compatible with my and pax0r's CM7.2 ROMs and ROMs based on them.
Don't try to flash it on stock ROMs, older CM7, OMGB/OMFGB, or CM9 ROMs, because it is NOT compatible with those ROMs.
Please DO NOT use any task killers; they DO NOT improve performance or battery life. They INTERFERE with your phone's stability (more crashes) and app compatibility (force closes).
IMPORTANT NOTES
No guarantees! If it kills your grandmother or your device, I'm not responsible. I'm not responsible if you brick your device by heavy OC, flashing, or voiding your warranty, or for any other pain or suffering you may feel as a result of using this kernel!!!
Using very high frequencies (over 806 MHz) is dangerous for your phone.
If you OC your phone over 806 MHz on my kernel, no support is provided.
(If you download, please hit Thanks below my post! Thank you!)
NOTE: After wiping battery stats, the system recreates them, which resets the battery's reported capacity; I advise you to recalibrate the battery after doing that.
KNOWN BUGS
NOT ALL CHIPS ARE CREATED EQUAL
Download:
No Guarantees! If it kills your grandmother or your device, I am NOT responsible! If you understand this:
(If you download, please hit Thanks below my post! Thank you!)
*RC12* [STABLE] Click me
Old Downloads: Click Me
INSTALL
How to Flash/Install the Kernel
Root your LG Optimus Me, then install a custom recovery
Download the latest version of the RM 32 kernel from this topic
Copy the zip file to your SD card
Reboot your phone into recovery mode
Wipe cache, Dalvik cache, and battery stats
Now install the kernel and enjoy:laugh:
Note: After FLASHING, the first reboot may take longer than usual; please be patient. After the first reboot, it may lag during initial load (let everything finish loading). Once everything is loaded and the phone is ready for use, reboot the phone a second time; the lag will be gone and everything should be silky smooth.
SOURCE
I respect the GPL (the license covering the Linux kernel), so all the up-to-date source code for this kernel is available on my GitHub: https://github.com/kerneldevs/RM-32-kernel-pecan
My kernel is, in turn, based on the publicly available Froyo kernel source from LG. You're free to fork, modify, and re-release the code as your own, but you must provide the source code for your resulting work. Doing so ensures you honor the terms of the license, and you're also giving back to the community. Basically, don't be a ****.
THANKS TO
drapalyuk - initial setup of the pecan kernel source and the biggest work for this device
pax0r - second setup of the pecan kernel source and also major work for this device
codeaurora forum - source and patches
Mik9 - some patches that I used in my kernel
Fserve - for sharing his kernel source; from his source I got some ideas for this kernel
Andy572 - used some of his patches
Tasssadar - for his kernel source based on Mik9's kernel
Roqu3 - for his kernel source for the P350; I got one fix from his source
CyanogenMod - for sharing their kernel source code; I used some patches from the CM kernel source
burstlam - got a nice idea about KGSL from his ZTE Blade source
SUPPORT
IF YOU LIKE MY WORK, YOU CAN USE THE DONATE BUTTON TO SUPPORT IT, OR YOU CAN PRESS THE THANKS BUTTON TO SHOW YOUR SUPPORT.
SOME INFO OF SOME KERNEL THINGS
CleanCache (via ZCache backend)
ZCache is a compressed cache similar to ZRAM, but the similarity ends there. ZCache is meant to provide as many "cleancache" pages (non-dirty or untouched "virgin" memory) as possible to apps that request new memory. CleanCache pages are very easy to allocate and no additional penalty is required to hand them out, so having more CleanCache pages will improve performance. Under heavy memory pressure, the kernel often will NOT have enough CleanCache pages, so it has to do EXTRA work to reclaim dirty cache pages and clean them for the new apps requesting them. That process creates a performance hit for the kernel and the app, so the idea is to use compression to make more CleanCache pages available for use. Of course there's a penalty to pay for using compression, but the compression penalty is smaller than the penalty for reclaiming dirty cache pages and allocating them after cleaning, so in the end CleanCache should add more performance.
USER EXPERIENCE BENCHMARKS HAVE BEEN MOVED TO THIS LINK
MORE
WANT FAST NEWS ABOUT MY WORK? THEN JOIN MY FACEBOOK GROUP: https://www.facebook.com/groups/OADPROM/
If you want to donate some bucks for the work I'm doing for the LG Optimus Me, go to my username and hit the 'donate to me' button. Otherwise I would be grateful if you could click the "Thanks" button on the bottom right of this post.
THANKS TO ALL
CHANGELOG
THE OLD CHANGELOG OF RM VERSIONS HAS BEEN MOVED; CLICK HERE TO SEE THE OLD CHANGELOG
09-07-2012 RC7 http://www.mediafire.com/?sxh8wt2u1b9493t
serial: msm_serial_hs_lite: Use pm_runtime to indicate device state
mm: Make memory hotplug aware of memmap holes
mfd: Use min_uV for voltage setting
msm: timer: read clocksource from global clock variable.
msm_bus: APIs for MSM bus scaling.
arm: add ARM-specific memory low-power support
msm: rmnet: Add tailroom for sk buffer to be transmitted
msm: Add Timpani Sound Device Profile
14-07-2012 RC8 http://www.mediafire.com/?ld6lrnbxrghdewb
msm: camera: Support for Dynamic Camera Logging
add backlight driver in st1.5
msm: mfd: Use debugfs interface to allow timpani codec register access
spi_qsd: Modify timeout mechanism to check SPI state valid bit.
Define and process new type of memory tag (ATAG_MEM_RESERVED)
msm: Add XO aggregation and voting API stubs
Add tpmd_dev from the tpm-emulator source to the kernel
arm: common: CP register access tool for Read/Write to CP registers
serial: msm_serial_hs: Use runtime PM for HSUART power state transitions
21-07-2012 RC10 http://www.mediafire.com/?97g5pqr71xuuj9h
rcu: "Tiny RCU", The Bloatwatch Edition
fs: simple fsync race fix
Increase readahead value
acpuclock tweaks
axi oc back
add the Stochastic Fair Blue (SFB) network scheduler - from zachariasmaladroit
sched: Fix over-scheduling bug [author andy572]
block: introduce the BFQ I/O scheduler
block: Fix atomic functions in bfq & update bfq to v2
msm_kgsl: Fix corner cases while adding ringbuffer commands
msm_kgsl: Take the driver lock after waiting for wakeup to complete
msm_kgsl: enable writecombine
msm: 7x27: Update the SDC2 GPIO disable configs
msm: 7x27: mmc: Add platform data for dummy CMD52
usb: msm_gadget: Check both USB state and VBUS during initialization
and some more small changes, check github repo for that
25-07-2012 RC11 http://www.mediafire.com/?3l6fi81l4no860t
mmc: msm_sdcc: Enhance the current mechanism of simulating PIO interrupt
msm: socinfo: move sysdev creation outside init
fs: mark_inode_dirty barrier fix
vmalloc: remove redundant unlikely()
mm: remove likely() from mapping_unevictable()
mm: remove likely() from grab_cache_page_write_begin()
writeback: avoid unnecessary determine_dirtyable_memory call
brk: fix min_brk lower bound computation for COMPAT_BRK
mm/dmapool.c: take lock only once in dma_pool_free()
mm/dmapool.c: use TASK_UNINTERRUPTIBLE in dma_pool_alloc()
fs/select.c: fix information leak to userspace
PM: Lock PM device list mutex in show_dev_hash()
PM: Prototype the pm_generic_ operations
mmc: Attribute the IO wait time properly in mmc_wait_for_req().
Wifi fix
Last version of RM Kernel
09-08-2012 RC12 http://www.mediafire.com/?j6e21kzhdhw3x3v
revert axi oc back
revert update acpuclock
netlink: Make nlmsg_find_attr take a const nlmsghdr*.
netfilter/nf_conntrack_netlink: fix ctnetlink_parse_tuple()
net/ethernet/eth: remove deprecated: print_mac() [Marin Mitov]
ipv4/netfilter/nf_nat_standalone: workaround to make -Wswitch happy
ipv6/xfrm6_tunnel: missing middle operand
fs/ext4/move_extent: fix uninitialized start_ext.ee_block [tytso]
cpufreq: fix memory leak in cpufreq_add_dev [Xiaotian Feng]
cgroup: introduce cancel_attach() [Daisuke Nishimura]
block: rescan partitions on invalidated devices on -ENOMEDIA too
block: add proper state guards to __elv_next_request
mtd: mtdconcat: fix NAND OOB write
HERE IS THE INFO ON THE ANDROID GOVERNORS
ALL CREDITS GO TO Deedii
Android CPU governors explained
1: OnDemand
2: OndemandX
3: Performance
4: Powersave
5: Conservative
6: Userspace
7: Min Max
8: Interactive
9: InteractiveX
10: Smartass
11: SmartassV2
12: Scary
13: Lagfree
14: Smoothass
15: Brazilianwax
16: SavagedZen
17: Lazy
18: Lionheart
19: LionheartX
20: Intellidemand
21: Hotplug
1: OnDemand Governor:
This governor has a hair trigger for boosting clockspeed to the maximum speed set by the user. If the CPU load placed by the user abates, the OnDemand governor will slowly step back down through the kernel's frequency steppings until it settles at the lowest possible frequency, or the user executes another task to demand a ramp.
OnDemand has excellent interface fluidity because of its high-frequency bias, but it can also have a relatively negative effect on battery life versus other governors. OnDemand is commonly chosen by smartphone manufacturers because it is well-tested, reliable, and virtually guarantees the smoothest possible performance for the phone. This is so because users are vastly more likely to ***** about performance than they are the few hours of extra battery life another governor could have granted them.
This final fact is important to know before you read about the Interactive governor: OnDemand scales its clockspeed in a work queue context. In other words, once the task that triggered the clockspeed ramp is finished, OnDemand will attempt to move the clockspeed back to minimum. If the user executes another task that triggers OnDemand's ramp, the clockspeed will bounce from minimum to maximum. This can happen especially frequently if the user is multi-tasking. This, too, has negative implications for battery life.
2: OndemandX:
Basically an OnDemand with suspend/wake profiles. This governor is supposed to be a battery-friendly OnDemand. When the screen is off, the max frequency is capped at 500 MHz. Even though OnDemand is the default governor in many kernels and is considered safe/stable, support for OnDemand/OndemandX depends on the CPU's capability to do fast frequency switching (very low-latency frequency transitions). I have read somewhere that the performance of OnDemand/OndemandX varied significantly with different I/O schedulers. This is not true for most of the other governors. I personally feel OnDemand/OndemandX goes best with the SIO I/O scheduler.
3: Performance Governor:
This locks the phone's CPU at maximum frequency. While this may sound like an ugly idea, there is growing evidence to suggest that running a phone at its maximum frequency at all times will allow a faster race-to-idle. Race-to-idle is the process by which a phone completes a given task, such as syncing email, and returns the CPU to the extremely efficient low-power state. This still requires extensive testing, and a kernel that properly implements a given CPU's C-states (low power states).
4: Powersave Governor:
The opposite of the Performance governor, the Powersave governor locks the CPU frequency at the lowest frequency set by the user.
5:Conservative Governor:
This biases the phone to prefer the lowest possible clockspeed as often as possible. In other words, a larger and more persistent load must be placed on the CPU before the conservative governor will be prompted to raise the CPU clockspeed. Depending on how the developer has implemented this governor, and the minimum clockspeed chosen by the user, the conservative governor can introduce choppy performance. On the other hand, it can be good for battery life.
The Conservative Governor is also frequently described as a "slow OnDemand," if that helps to give you a more complete picture of its functionality.
6: Userspace Governor:
This governor, exceptionally rare for the world of mobile devices, allows any program executed by the user to set the CPU's operating frequency. This governor is more common amongst servers or desktop PCs where an application (like a power profile app) needs privileges to set the CPU clockspeed.
7: Min Max
Well, this governor makes use of only the minimum and maximum frequencies based on workload; no intermediate frequencies are used.
8: Interactive Governor:
Much like the OnDemand governor, the Interactive governor dynamically scales CPU clockspeed in response to the workload placed on the CPU by the user. This is where the similarities end. Interactive is significantly more responsive than OnDemand, because it's faster at scaling to maximum frequency.
Unlike OnDemand, which you'll recall scales clockspeed in the context of a work queue, Interactive scales the clockspeed over the course of a timer set arbitrarily by the kernel developer. In other words, if an application demands a ramp to maximum clockspeed (by placing 100% load on the CPU), a user can execute another task before the governor starts reducing CPU frequency. This can eliminate the frequency bouncing discussed in the OnDemand section. Because of this timer, Interactive is also better prepared to utilize intermediate clockspeeds that fall between the minimum and maximum CPU frequencies. This is another pro-battery life benefit of Interactive.
However, because Interactive is permitted to spend more time at maximum frequency than OnDemand (for device performance reasons), the battery-saving benefits discussed above are effectively negated. Long story short, Interactive offers better performance than OnDemand (some say the best performance of any governor) and negligibly different battery life.
Interactive also makes the assumption that a user turning the screen on will shortly be followed by the user interacting with some application on their device. Because of this, screen on triggers a ramp to maximum clockspeed, followed by the timer behavior described above.
9: InteractiveX Governor:
Created by kernel developer "Imoseyon," the InteractiveX governor is based heavily on the Interactive governor, enhanced with tuned timer parameters to better balance battery vs. performance. The InteractiveX governor's defining feature, however, is that it locks the CPU frequency to the user's lowest defined speed when the screen is off.
10: Smartass
It is based on the concept of the Interactive governor.
I have always agreed that in theory the way Interactive works, by taking over the idle loop, is very attractive. I have never managed to tweak it so it would behave decently in real life. Smartass is a complete rewrite of the code, plus more. I think it's a success. Performance is on par with the "old" MinMax, and I think Smartass is a bit more responsive. Battery life is hard to quantify precisely, but it does spend much more time at the lower frequencies.
Smartass will also cap the max frequency when sleeping to 352 MHz (or, if your min frequency is higher than 352 - why?! - it will cap it to your min frequency). Let's take for example the 528/176 kernel: it will sleep at 352/176. No need for sleep profiles any more!
11: SmartassV2:
Version 2 of the original Smartass governor from Erasmux. Another favorite for many people. The governor aims for an "ideal frequency" and ramps up more aggressively towards this frequency, and less aggressively after it. It uses different ideal frequencies for screen on and screen off, namely awake_ideal_freq and sleep_ideal_freq. This governor scales down the CPU very fast (to hit sleep_ideal_freq soon) while the screen is off and scales up rapidly to awake_ideal_freq (500 MHz for the GS2 by default) when the screen is on. There's no upper limit for frequency while the screen is off (unlike Smartass), so the entire frequency range is available for the governor to use during screen-on and screen-off states. The motto of this governor is a balance between performance and battery.
12: Scary
A new governor written based on Conservative with some Smartass features; it scales according to Conservative's laws. So it will start from the bottom, take a load sample, and if the load is above the up-threshold, ramp up only one speed at a time, and ramp down one at a time. It will automatically cap the screen-off speeds to 245 MHz, and if your min frequency is higher than 245 MHz, it will reset the min to 120 MHz while the screen is off and restore it upon screen awakening, still scaling according to Conservative's laws. So it spends most of its time at lower frequencies. The goal of this is to get the best battery life with decent performance. It gives the same performance as Conservative right now; it will get tweaked over time.
13: Lagfree:
Lagfree is similar to OnDemand. The main difference is its optimization to be more battery friendly. Frequency is gracefully decreased and increased, unlike OnDemand, which jumps to 100% too often. Lagfree does not skip any frequency step while scaling up or down. Remember that if there's a requirement for a sudden burst of power, Lagfree cannot satisfy that, since it has to raise the CPU through each higher frequency step from the current one. Some users report that video playback using Lagfree stutters a little.
14: Smoothass:
The same as the Smartass governor, but MUCH more aggressive, and across the board this one has battery life that is about a third better than the stock kernel.
15: Brazilianwax:
Similar to SmartassV2. More aggressive ramping, so more performance, less battery.
16: SavagedZen:
Another SmartassV2-based governor. Achieves a good balance between performance and battery as compared to Brazilianwax.
17: Lazy:
This governor from Ezekeel is basically an OnDemand with an additional parameter, min_time_state, to specify the minimum time the CPU stays at a frequency before scaling up/down. The idea here is to eliminate any instabilities caused by fast frequency switching by OnDemand. Lazy polls more often than OnDemand but changes frequency only after completing min_time_state on a step, overriding the sampling interval. Lazy also has a screenoff_maxfreq parameter which, when enabled, will cause the governor to always select the maximum frequency while the screen is off.
18: Lionheart:
Lionheart is a Conservative-based governor which is based on Samsung's Update3 source.
The tunables (such as the thresholds and sampling rate) were changed so the governor behaves more like the Performance one, at the cost of battery, as the scaling is very aggressive.
19: LionheartX
LionheartX is based on Lionheart but has a few changes to the tunables and features a suspend profile based on the Smartass governor.
20: Intellidemand:
Intellidemand aka Intelligent Ondemand from Faux is yet another governor that's based on ondemand. Unlike what some users believe, this governor is not the replacement for OC Daemon (Having different governors for sleep and awake). The original intellidemand behaves differently according to GPU usage. When GPU is really busy (gaming, maps, benchmarking, etc) intellidemand behaves like ondemand. When GPU is 'idling' (or moderately busy), intellidemand limits max frequency to a step depending on frequencies available in your device/kernel for saving battery. This is called browsing mode. We can see some 'traces' of interactive governor here. Frequency scale-up decision is made based on idling time of CPU. Lower idling time (<20%) causes CPU to scale-up from current frequency. Frequency scale-down happens at steps=5% of max frequency. (This parameter is tunable only in conservative, among the popular governors)
To sum up, this is an intelligent ondemand that enters browsing mode to limit max frequency when GPU is idling, and (exits browsing mode) behaves like ondemand when GPU is busy; to deliver performance for gaming and such. Intellidemand does not jump to highest frequency when screen is off.
21: Hotplug Governor:
The Hotplug governor performs very similarly to the OnDemand governor, with the added benefit of being more precise about how it steps down through the kernel's frequency table as the governor measures the user's CPU load. However, the Hotplug governor's defining feature is its ability to turn unused CPU cores off during periods of low CPU utilization. This is known as "hotplugging."
Obviously, this governor is only available on multi-core devices.
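For completeness: whichever of the governors above a kernel ships, it is selected through the standard Linux cpufreq sysfs interface rather than anything governor-specific. A rough sketch in C (the paths are the stock cpufreq ones; "interactive" is only an example name, and writing requires root):

#include <stdio.h>

#define CPUFREQ "/sys/devices/system/cpu/cpu0/cpufreq/"

int main(void)
{
    char buf[256];
    FILE *f;

    /* List the governors this kernel was built with. */
    f = fopen(CPUFREQ "scaling_available_governors", "r");
    if (f) {
        if (fgets(buf, sizeof(buf), f))
            printf("available: %s", buf);
        fclose(f);
    }

    /* Switch the active governor (needs root); "interactive" is just an example name. */
    f = fopen(CPUFREQ "scaling_governor", "w");
    if (!f) {
        perror(CPUFREQ "scaling_governor");
        return 1;
    }
    fputs("interactive\n", f);
    fclose(f);
    return 0;
}

Apps like SetCPU or No-frills CPU Control do exactly this behind the scenes.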
=============================================
ALL CREDITS GO TO THE USERS OF XDA WHO CREATED THE VARIOUS THREADS ABOUT I/O; THIS I/O INFO IS COMPILED FROM THOSE THREADS
ALL INFO ABOUT I/O
I/O:- short for Input & Output
I/O Scheduler:- Input/output (I/O) scheduling is the method by which computer operating systems decide the order in which block I/O operations will be submitted to storage volumes. I/O scheduling is sometimes called 'disk scheduling'.
I/O schedulers can have many purposes depending on their goals; some common goals are:
- To minimize time wasted by hard disk seeks.
- To prioritize a certain processes' I/O requests.
- To give a share of the disk bandwidth to each running process.
- To guarantee that certain requests will be issued before a particular deadline.
Info on I/O Schedulers
SIO:- The SIO scheduler is based on the Deadline scheduler, but it's more like a mix between NOOP and Deadline. In other words, SIO is like a lighter version of Deadline, but it doesn't do any kind of sorting, so it's aimed mainly at random-access devices (like SSDs) where request sorting is not needed (as any sector can be accessed in constant time, regardless of its physical location).
NOOP:- The NOOP scheduler inserts all incoming I/O requests into a simple, unordered FIFO queue and implements request merging.
The scheduler assumes I/O performance optimization will be handled at some other layer of the I/O hierarchy, e.g., at the block device, by an intelligent HBA such as a Serial Attached SCSI (SAS) RAID controller, or by an externally attached controller such as a storage subsystem accessed through a switched Storage Area Network.
ANTICIPATORY:- Anticipatory scheduling is an algorithm for scheduling hard disk input/output.
It seeks to increase the efficiency of disk utilization by "anticipating" synchronous read operations.
ADAPTIVE ANTICIPATORY SCHEDULER:- For the anticipatory scheduler, we scale up the anticipation timeout (antic_expire) using the latency scaling factor over time. When the virtual disk latencies are low, a small scaling of the timeout is sufficient to prevent deceptive idleness, whereas when the latencies are high, a larger scaling of the timeout value may be required to achieve the same. Note that such dynamic setting of the timeout value ensures that we attain a good trade-off between throughput (lost due to idling) and deceptive-idleness mitigation. Setting a high value for the scaling factor (increasing idling time) only happens when the disk service latencies themselves are higher. This may not necessarily cause a significant loss in throughput, because submitting a request from another process instead of idling is not going to improve throughput if the virtual disk itself does not get any faster than it is at the current period. A higher anticipation timeout might also be capable of absorbing process-scheduling effects inside the VM. The read time with the modified implementation shows that it is possible to mitigate the effects of deceptive idleness by adapting the timeout. An interesting related observation is that the level to which the improvement is possible varies for different Domain-0 schedulers: noop - 39%, anticipatory - 67% and cfq - 36%. This again points to the fact that the I/O scheduler used in Domain-0 is important for the VM's ability to enforce I/O scheduling guarantees. Different Domain-0 I/O schedulers likely have a different service latency footprint inside the VMs, contributing to different levels of improvement.
CFQ:- CFQ, also known as "Completely Fair Queuing", is an I/O scheduler for the Linux kernel which was written in 2003 by Jens Axboe.
CFQ works by placing synchronous requests submitted by processes into a number of per-process queues and then allocating timeslices for each of the queues to access the disk. The length of the time slice and the number of requests a queue is allowed to submit depend on the I/O priority of the given process. Asynchronous requests for all processes are batched together in fewer queues, one per priority.
DEADLINE:- The goal of the Deadline scheduler is to attempt to guarantee a start service time for a request. It does that by imposing a deadline on all I/O operations to prevent starvation of requests. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by sector number.
Before serving the next request, the Deadline scheduler decides which queue to use. Read queues are given a higher priority, because processes usually block on read operations. Next, the Deadline scheduler checks if the first request in the deadline queue has expired. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, the scheduler also serves a batch of requests following the chosen request in the sorted queue.
V(R):- The next request is decided based on its distance from the last request, with a multiplicative penalty of 'rev_penalty' applied for reversing the head direction. A rev_penalty of 1 means SSTF behaviour. As this variable is increased, the algorithm approaches pure SCAN. Setting rev_penalty to 0 forces SCAN.
SIMPLE:- Does not do any kind of sorting, as it is aimed at aleatory (random) access devices, but it does some basic merging. We try to keep minimum overhead to achieve low latency.
BFQ:- BFQ is a proportional share disk scheduling algorithm based on the slice-by-slice service scheme of CFQ. But BFQ assigns budgets, measured in number of sectors, to tasks instead of time slices. The disk is not granted to the active task for a given time slice, but until it has exhausted its assigned budget. This change from the time domain to the service domain allows BFQ to distribute the disk bandwidth among tasks as desired, without any distortion due to ZBR, workload fluctuations or other factors. BFQ uses an ad hoc internal scheduler, called B-WF2Q+, to schedule tasks according to their budgets. Thanks to this accurate scheduler, BFQ can afford to assign high budgets to disk-bound non-seeky tasks (to boost the throughput), and yet guarantee low latencies to interactive and soft real-time applications.
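As with governors, the active I/O scheduler is chosen per block device through sysfs. A minimal sketch (the /sys/block layout is standard Linux; mmcblk0 is an assumed device name, typical for phones, and writing requires root):

#include <stdio.h>

int main(void)
{
    /* mmcblk0 is an assumed device name; adjust for your device. */
    const char *path = "/sys/block/mmcblk0/queue/scheduler";
    char buf[256];
    FILE *f;

    /* Reading lists the compiled-in schedulers; the active one is in brackets,
     * e.g. "noop deadline [sio] cfq". */
    f = fopen(path, "r");
    if (f) {
        if (fgets(buf, sizeof(buf), f))
            printf("schedulers: %s", buf);
        fclose(f);
    }

    /* Writing one of those names (needs root) makes it the active scheduler. */
    f = fopen(path, "w");
    if (!f) {
        perror(path);
        return 1;
    }
    fputs("deadline\n", f);
    fclose(f);
    return 0;
}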
cips gokhle said:
Welcome to my RM kernel thread
About
THIS KERNEL IS BASED ON THE PECAN KERNEL.
RM KERNEL is a very optimized kernel for 2.3 ROMs (on 2.2 you will face problems). I made this kernel to push performance as hard as it can go.
Features & Changelog
Installation
Reboot into recovery
Flash the latest kernel
Reboot
Enjoy
NOTE: THIS KERNEL IS ONLY FOR MY CM NIGHTLY AND PAX0R CM7.2 ROMS. DON'T FLASH IT ON VIVEK CM7.2, OMFGB, OMGB, CM7.1, OR 2.2 ROMS. (FOR CM7.1, OMFGB, OMGB, AND VIVEK CM7.2 I'M MAKING ANOTHER VERSION.)
Downloads
V1000: http://www.mediafire.com/?aw3t3jrz99151zy
Good job bro
I will try
cooler1182 said:
Good job bro
I will try
I'm waiting for your review.
I don't quite understand what changes installing this kernel will make.
zizka said:
I don't quite understand what changes installing this kernel will make.
This kernel will improve your touchscreen and your phone's performance,
but the touchscreen improvements work best with my nightly.
I made a backup of my data and installed your kernel. The phone boots surprisingly quickly. Programs on the memory card need some time before they can be used. Touch works as well as before. I didn't see any changes; maybe I'm just blind. I put it on Nightly 9.
zizka said:
I made a backup of my data and installed your kernel. The phone boots surprisingly quickly. Programs on the memory card need some time before they can be used. Touch works as well as before. I didn't see any changes; maybe I'm just blind. I put it on Nightly 9.
Hmm, in the FB group one user tested this and it works for him. Anyway, Nightly 10 has an updated version of this kernel, 2.6.32.59.
Now I can mount the sd-ext with Link2SD. In Fruit Ninja you feel the difference; it's faster and more responsive than ever.
I don't see changes. Multitouch has an axis-inversion bug, and performance is unchanged for me. Thanks!
THIS KERNEL IS NOW OBSOLETE; DON'T USE IT.
The newest stable kernel releases are now integrated into my version of CYANOGENMOD 7.2.
Can you just upload it to some other site? MediaFire isn't working! I'm not able to download it.
ethan1234 said:
Can you just upload it to some other site? MediaFire isn't working! I'm not able to download it.
SEE THIS
http://forum.xda-developers.com/showpost.php?p=25967572&postcount=12
Guys, the project has restarted; test it out, guys.
Is this kernel better than .35?
agen47 said:
Is this kernel better than .35?
Yes, it's better. It has new things that are a first for the P350.
I tried both versions of this kernel and both worked well. I can't really tell the performance difference with vsync off, maybe a few more fps in some heavy games. ATM I'm using vsync on, on kang2, running at 806 MHz; no kernel panic yet.
agen47 said:
I tried both versions of this kernel and both worked well. I can't really tell the performance difference with vsync off, maybe a few more fps in some heavy games. ATM I'm using vsync on, on kang2, running at 806 MHz; no kernel panic yet.
You will only feel the difference in games with vsync off.
here's the basic description about vsync:
vsync off = great for benchmarks but crap in real life.
vsync on = crap for benchmarks, great in real life.
Say your screen refreshes at 60Hz - Vsync on will attempt to display 30fps to avoid tearing. 30 goes into 60 twice evenly... get it?
Vsync off will display as many fps as possible. So rather than holding back and displaying 30fps it will allow 35fps. This will cause tearing because 35 does not go into 60 evenly.
It's the same effect you get when playing video games on a PC.
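To put rough numbers on that: a 60 Hz panel refreshes every 1000/60 ≈ 16.7 ms, so a frame rate only lines up with the panel when it divides 60 evenly. A tiny illustrative check, using the same numbers as the example above:

#include <stdio.h>

int main(void)
{
    const int refresh_hz = 60;      /* panel refresh rate from the example above */
    const int rates[] = { 30, 35 }; /* vsync-on target vs. the uncapped example */

    for (int i = 0; i < 2; i++) {
        /* A frame rate only aligns with the panel when it divides the refresh rate evenly. */
        if (refresh_hz % rates[i] == 0)
            printf("%d fps: each frame lasts exactly %d refreshes -> no tearing\n",
                   rates[i], refresh_hz / rates[i]);
        else
            printf("%d fps: frames change mid-refresh -> tearing\n", rates[i]);
    }
    return 0;
}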
So here it is. I have had the Note II since a couple of weeks after its release for T-Mobile USA and have loved it since, like most of you, I am sure. With that being said, what is the fun of having an Android phone without changing some things to make it better?! This is a kernel based on the Jelly Bean kernel source straight from Samsung themselves. I finally hit a point in this kernel that I felt was worthy of release, so I am doing just that, though it is a long way from where I am sure it will be in the end. I benchmarked it against the stock kernel on MB4 with much higher scores, so I am pleased with that, along with the battery life I am experiencing. I hope you all enjoy it, and don't be shy about posting anything you would like to see added or changed in future releases of this kernel. Thank you all.
I highly recommend doing a full CWM backup of everything, as if you were flashing a ROM, since this will back up everything, including the kernel in use prior to flashing this one.
The Note II now packs its modules with the kernel, including the very important Wi-Fi module needed to use Wi-Fi, so as of now it's looking like I will have to upload multiple zips, one per ROM. Just post which ROMs you are using so I can get an idea of exactly which boot.img's you guys need and post the corresponding flashable zips. If anyone knows of a better method of doing this, feel free to let me know.
Prerequisites:
- Root
- CWM Recovery
There are a few steps to flashing this like any other Android Software:
1. Download the zip that matches the version/ROM you are using
2. Place zip on the root of either your internal or external
3. Enter Recovery and perform a CWM backup (optional but highly recommended)
4. Select "flash zip" in recovery and select the zip you downloaded and placed on your sdcard
5. Reboot
Downloads Section
Mod edit: Download links and other information removed
Changelog
Kernel #3
- CRT TV OFF support
- Charge Control System implemented *thanks to Andrei for his code*
- Charge Control enabled (Fast USB Charge)
- Crude fast USB charge disabled
- Sysfs helper file added for c control
- Faster device boot time
- Sensorhub write for every boot disabled *thanks to Andrei*
- Dynamic FSync Control System implemented and enabled *thanks to Andrei for this code*
- Increased VOODOO Headset frequency
- BFQ Scheduler set to default scheduler
- Updated ck BFS kernel optimizations for speed
- BFS modifications to kernel elements still in effect
- BFS CPU Scheduler disabled for now
- CFS CPU Scheduler enabled now
Kernel #2
- Added BFS CPU scheduler! *Written by Con Kolivas thank you buddy*
- BFS 406 currently in use
- BFS patch backported manually applied successfully (no code left out)
- Read about BFS in the post below
- VOODOO enhanced sound engine added *committed by ptmr*
- VOODOO enhanced sound engine enabled
- 16GB eMMC SDS (sudden death syndrome) patch applied *thanks to samsung*
- 16GB brick fix applied
- Exynos Memory security hole fixed *thanks to andreilux for the patch*
- Faster USB charge enabled
- Added NEW BFQ v6r1 I/O Scheduler *haven't seen anyone else using r1; it's supposed to benchmark higher than v6*
- Added Early Queue Merge code to BFQ I/O Scheduler
- I/O context updated for BFQ
- Added ROW I/O Scheduler
- Added SIO I/O Scheduler
- Added VR I/O Scheduler
- Added ZEN I/O Scheduler
- Deadline Scheduler optimized for flash devices (our devices)
- More Deadline Scheduler optimizations
- Added Triangle Away support *thanks chainfire*
- NTFS filesystem support
- NTFS read+WRITE enabled
- CPU hyperthreading enabled
Kernel #1
- EXT partitions using relatime
- EXT support compiled into kernel, not as a module
- EXT 1/2/3 support
- EXT 4 support with backwards compatibility
- EXT 4 used for EXT 2/3 filesystems
- Added Interactive governor
- Added Conservative governor
- Overclockable up to 1.9Ghz *thanks Glewarne*
- Support for controllable voltage interface for CPU
- Reduced CPU frequency transition for snappy response time from CPU
- Optimized GPU for higher performance and longer battery life
- Added low frequencies for GPU to save battery when not doing gfx intense tasks
- Added Overclocked frequencies for GPU *thanks to Glewarne for added freqs and tables*
- Undervolted GPU to save battery life at all times
- Increased memory allocation for GPU
- Removed Mali GPU state tracking
- Reduced Mali GPU utilization calculation timeout
- Added optimized ARM RWSEM algorithm
- Enabled Swap capability
- Compiled with emu optimizations
- Extra RAM being fed to GPU
- VPN support included as module
- Included every module stock kernel does plus some extras
- Other changes made I will remember to add here
Kernel #1 Benchmark (Stock T-Mobile MB4 ROM)
+BFS - The Brain **** Scheduler by Con Kolivas.
+
+Goals.
+
+The goal of the Brain **** Scheduler, referred to as BFS from here on, is to
+completely do away with the complex designs of the past for the cpu process
+scheduler and instead implement one that is very simple in basic design.
+The main focus of BFS is to achieve excellent desktop interactivity and
+responsiveness without heuristics and tuning knobs that are difficult to
+understand, impossible to model and predict the effect of, and when tuned to
+one workload cause massive detriment to another.
+
+
+Design summary.
+
+BFS is best described as a single runqueue, O(n) lookup, earliest effective
+virtual deadline first design, loosely based on EEVDF (earliest eligible virtual
+deadline first) and my previous Staircase Deadline scheduler. Each component
+shall be described in order to understand the significance of, and reasoning for
+it. The codebase when the first stable version was released was approximately
+9000 lines less code than the existing mainline linux kernel scheduler (in
+2.6.31). This does not even take into account the removal of documentation and
+the cgroups code that is not used.
+
+Design reasoning.
+
+The single runqueue refers to the queued but not running processes for the
+entire system, regardless of the number of CPUs. The reason for going back to
+a single runqueue design is that once multiple runqueues are introduced,
+per-CPU or otherwise, there will be complex interactions as each runqueue will
+be responsible for the scheduling latency and fairness of the tasks only on its
+own runqueue, and to achieve fairness and low latency across multiple CPUs, any
+advantage in throughput of having CPU local tasks causes other disadvantages.
+This is due to requiring a very complex balancing system to at best achieve some
+semblance of fairness across CPUs and can only maintain relatively low latency
+for tasks bound to the same CPUs, not across them. To increase said fairness
+and latency across CPUs, the advantage of local runqueue locking, which makes
+for better scalability, is lost due to having to grab multiple locks.
+
+A significant feature of BFS is that all accounting is done purely based on CPU
+used and nowhere is sleep time used in any way to determine entitlement or
+interactivity. Interactivity "estimators" that use some kind of sleep/run
+algorithm are doomed to fail to detect all interactive tasks, and to falsely tag
+tasks that aren't interactive as being so. The reason for this is that it is
+close to impossible to determine that when a task is sleeping, whether it is
+doing it voluntarily, as in a userspace application waiting for input in the
+form of a mouse click or otherwise, or involuntarily, because it is waiting for
+another thread, process, I/O, kernel activity or whatever. Thus, such an
+estimator will introduce corner cases, and more heuristics will be required to
+cope with those corner cases, introducing more corner cases and failed
+interactivity detection and so on. Interactivity in BFS is built into the design
+by virtue of the fact that tasks that are waking up have not used up their quota
+of CPU time, and have earlier effective deadlines, thereby making it very likely
+they will preempt any CPU bound task of equivalent nice level. See below for
+more information on the virtual deadline mechanism. Even if they do not preempt
+a running task, because the rr interval is guaranteed to have a bound upper
+limit on how long a task will wait for, it will be scheduled within a timeframe
+that will not cause visible interface jitter.
+
+
+Design details.
+
+Task insertion.
+
+BFS inserts tasks into each relevant queue as an O(1) insertion into a double
+linked list. On insertion, *every* running queue is checked to see if the newly
+queued task can run on any idle queue, or preempt the lowest running task on the
+system. This is how the cross-CPU scheduling of BFS achieves significantly lower
+latency per extra CPU the system has. In this case the lookup is, in the worst
+case scenario, O(n) where n is the number of CPUs on the system.
+
+Data protection.
+
+BFS has one single lock protecting the process local data of every task in the
+global queue. Thus every insertion, removal and modification of task data in the
+global runqueue needs to grab the global lock. However, once a task is taken by
+a CPU, the CPU has its own local data copy of the running process' accounting
+information which only that CPU accesses and modifies (such as during a
+timer tick) thus allowing the accounting data to be updated lockless. Once a
+CPU has taken a task to run, it removes it from the global queue. Thus the
+global queue only ever has, at most,
+
+ (number of tasks requesting cpu time) - (number of logical CPUs) + 1
+
+tasks in the global queue. This value is relevant for the time taken to look up
+tasks during scheduling. This will increase if many tasks with CPU affinity set
+in their policy to limit which CPUs they're allowed to run on if they outnumber
+the number of CPUs. The +1 is because when rescheduling a task, the CPU's
+currently running task is put back on the queue. Lookup will be described after
+the virtual deadline mechanism is explained.
+
+Virtual deadline.
+
+The key to achieving low latency, scheduling fairness, and "nice level"
+distribution in BFS is entirely in the virtual deadline mechanism. The one
+tunable in BFS is the rr_interval, or "round robin interval". This is the
+maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy)
+tasks of the same nice level will be running for, or looking at it the other
+way around, the longest duration two tasks of the same nice level will be
+delayed for. When a task requests cpu time, it is given a quota (time_slice)
+equal to the rr_interval and a virtual deadline. The virtual deadline is
+offset from the current time in jiffies by this equation:
+
+ jiffies + (prio_ratio * rr_interval)
+
+The prio_ratio is determined as a ratio compared to the baseline of nice -20
+and increases by 10% per nice level. The deadline is a virtual one only in that
+no guarantee is placed that a task will actually be scheduled by this time, but
+it is used to compare which task should go next. There are three components to
+how a task is next chosen. First is time_slice expiration. If a task runs out
+of its time_slice, it is descheduled, the time_slice is refilled, and the
+deadline reset to that formula above. Second is sleep, where a task no longer
+is requesting CPU for whatever reason. The time_slice and deadline are _not_
+adjusted in this case and are just carried over for when the task is next
+scheduled. Third is preemption, and that is when a newly waking task is deemed
+higher priority than a currently running task on any cpu by virtue of the fact
+that it has an earlier virtual deadline than the currently running task. The
+earlier deadline is the key to which task is next chosen for the first and
+second cases. Once a task is descheduled, it is put back on the queue, and an
+O(n) lookup of all queued-but-not-running tasks is done to determine which has
+the earliest deadline and that task is chosen to receive CPU next.
+
+The CPU proportion of different nice tasks works out to be approximately the
+
+ (prio_ratio difference)^2
+
+The reason it is squared is that a task's deadline does not change while it is
+running unless it runs out of time_slice. Thus, even if the time actually
+passes the deadline of another task that is queued, it will not get CPU time
+unless the current running task deschedules, and the time "base" (jiffies) is
+constantly moving.
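(An illustrative aside, not from the BFS source or Kolivas's documentation: the deadline formula above is simple enough to sketch in a few lines of C. The 6 ms rr_interval and the helper names are assumptions for the example; the 10%-per-nice-level ratio follows the description above.)

#include <stdio.h>

#define RR_INTERVAL_MS 6.0  /* assumed round-robin interval; a tunable in BFS */

/* Ratio relative to the nice -20 baseline, growing 10% per nice level,
 * per the description above (names here are made up for the example). */
static double prio_ratio(int nice)
{
    double ratio = 1.0;
    for (int n = -20; n < nice; n++)
        ratio *= 1.1;
    return ratio;
}

/* deadline = jiffies + prio_ratio * rr_interval (expressed in ms here) */
static double virtual_deadline(double now_ms, int nice)
{
    return now_ms + prio_ratio(nice) * RR_INTERVAL_MS;
}

int main(void)
{
    /* Three tasks waking at the same instant: the lower-nice task gets the
     * earlier deadline, so it is picked first. */
    printf("nice -20 deadline: %.1f ms\n", virtual_deadline(1000.0, -20));
    printf("nice   0 deadline: %.1f ms\n", virtual_deadline(1000.0, 0));
    printf("nice  19 deadline: %.1f ms\n", virtual_deadline(1000.0, 19));
    return 0;
}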
+
+Task lookup.
+
+BFS has 103 priority queues. 100 of these are dedicated to the static priority
+of realtime tasks, and the remaining 3 are, in order of best to worst priority,
+SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority
+scheduling). When a task of these priorities is queued, a bitmap of running
+priorities is set showing which of these priorities has tasks waiting for CPU
+time. When a CPU is made to reschedule, the lookup for the next task to get
+CPU time is performed in the following way:
+
+First the bitmap is checked to see what static priority tasks are queued. If
+any realtime priorities are found, the corresponding queue is checked and the
+first task listed there is taken (provided CPU affinity is suitable) and lookup
+is complete. If the priority corresponds to a SCHED_ISO task, they are also
+taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds
+to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this
+stage, every task in the runlist that corresponds to that priority is checked
+to see which has the earliest set deadline, and (provided it has suitable CPU
+affinity) it is taken off the runqueue and given the CPU. If a task has an
+expired deadline, it is taken and the rest of the lookup aborted (as they are
+chosen in FIFO order).
+
+Thus, the lookup is O(n) in the worst case only, where n is as described
+earlier, as tasks may be chosen before the whole task list is looked over.
+
+
+Scalability.
+
+The major limitations of BFS will be that of scalability, as the separate
+runqueue designs will have less lock contention as the number of CPUs rises.
+However they do not scale linearly even with separate runqueues as multiple
+runqueues will need to be locked concurrently on such designs to be able to
+achieve fair CPU balancing, to try and achieve some sort of nice-level fairness
+across CPUs, and to achieve low enough latency for tasks on a busy CPU when
+other CPUs would be more suited. BFS has the advantage that it requires no
+balancing algorithm whatsoever, as balancing occurs by proxy simply because
+all CPUs draw off the global runqueue, in priority and deadline order. Despite
+the fact that scalability is _not_ the prime concern of BFS, it both shows very
+good scalability to smaller numbers of CPUs and is likely a more scalable design
+at these numbers of CPUs.
+
+It also has some very low overhead scalability features built into the design
+when it has been deemed their overhead is so marginal that they're worth adding.
+The first is the local copy of the running process' data to the CPU it's running
+on to allow that data to be updated lockless where possible. Then there is
+deference paid to the last CPU a task was running on, by trying that CPU first
+when looking for an idle CPU to use the next time it's scheduled. Finally there
+is the notion of "sticky" tasks that are flagged when they are involuntarily
+descheduled, meaning they still want further CPU time. This sticky flag is
+used to bias heavily against those tasks being scheduled on a different CPU
+unless that CPU would be otherwise idle. When a cpu frequency governor is used
+that scales with CPU load, such as ondemand, sticky tasks are not scheduled
+on a different CPU at all, preferring instead to go idle. This means the CPU
+they were bound to is more likely to increase its speed while the other CPU
+will go idle, thus speeding up total task execution time and likely decreasing
+power usage. This is the only scenario where BFS will allow a CPU to go idle
+in preference to scheduling a task on the earliest available spare CPU.
+
+The real cost of migrating a task from one CPU to another is entirely dependent
+on the cache footprint of the task, how cache intensive the task is, how long
+it's been running on that CPU to take up the bulk of its cache, how big the CPU
+cache is, how fast and how layered the CPU cache is, how fast a context switch
+is... and so on. In other words, it's close to random in the real world where we
+do more than just one sole workload. The only thing we can be sure of is that
+it's not free. So BFS uses the principle that an idle CPU is a wasted CPU and
+utilising idle CPUs is more important than cache locality, and cache locality
+only plays a part after that.
+
+When choosing an idle CPU for a waking task, the cache locality is determined
+according to where the task last ran and then idle CPUs are ranked from best
+to worst to choose the most suitable idle CPU based on cache locality, NUMA
+node locality and hyperthread sibling busyness. They are chosen in the
+following preference (if idle):
+
+* Same core, idle or busy cache, idle threads
+* Other core, same cache, idle or busy cache, idle threads.
+* Same node, other CPU, idle cache, idle threads.
+* Same node, other CPU, busy cache, idle threads.
+* Same core, busy threads.
+* Other core, same cache, busy threads.
+* Same node, other CPU, busy threads.
+* Other node, other CPU, idle cache, idle threads.
+* Other node, other CPU, busy cache, idle threads.
+* Other node, other CPU, busy threads.
+
+This shows the SMT or "hyperthread" awareness in the design as well which will
+choose a real idle core first before a logical SMT sibling which already has
+tasks on the physical CPU.
+
+Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark.
+However this benchmarking was performed on an earlier design that was far less
+scalable than the current one so it's hard to know how scalable it is in terms
+of both CPUs (due to the global runqueue) and heavily loaded machines (due to
+O(n) lookup) at this stage. Note that in terms of scalability, the number of
+_logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x)
+quad core (4X) hyperthreaded (2X) machine is effectively a 16X. Newer benchmark
+results are very promising indeed, without needing to tweak any knobs, features
+or options. Benchmark contributions are most welcome.
+
+
+Features
+
+As the initial prime target audience for BFS was the average desktop user, it
+was designed to not need tweaking, tuning or have features set to obtain benefit
+from it. Thus the number of knobs and features has been kept to an absolute
+minimum and should not require extra user input for the vast majority of cases.
+There are precisely 2 tunables, and 2 extra scheduling policies. The rr_interval
+and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition
+to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is
+support for CGROUPS. The average user should neither need to know what these
+are, nor should they need to be using them to have good desktop behaviour.
+
+rr_interval
+
+There is only one "scheduler" tunable, the round robin interval. This can be
+accessed in
+
+ /proc/sys/kernel/rr_interval
+
+The value is in milliseconds, and the default value is set to 6ms. Valid values
+are from 1 to 1000. Decreasing the value will decrease latencies at the cost of
+decreasing throughput, while increasing it will improve throughput, but at the
+cost of worsening latencies. The accuracy of the rr interval is limited by HZ
+resolution of the kernel configuration. Thus, the worst case latencies are
+usually slightly higher than this actual value. BFS uses "dithering" to try and
+minimise the effect the Hz limitation has. The default value of 6 is not an
+arbitrary one. It is based on the fact that humans can detect jitter at
+approximately 7ms, so aiming for much lower latencies is pointless under most
+circumstances. It is worth noting this fact when comparing the latency
+performance of BFS to other schedulers. Worst case latencies being higher than
+7ms are far worse than average latencies not being in the microsecond range.
+Experimentation has shown that rr intervals being increased up to 300 can
+improve throughput but beyond that, scheduling noise from elsewhere prevents
+further demonstrable throughput.
+
+Isochronous scheduling.
+
+Isochronous scheduling is a unique scheduling policy designed to provide
+near-real-time performance to unprivileged (ie non-root) users without the
+ability to starve the machine indefinitely. Isochronous tasks (which means
+"same time") are set using, for example, the schedtool application like so:
+
+ schedtool -I -e amarok
+
+This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works
+is that it has a priority level between true realtime tasks and SCHED_NORMAL
+which would allow them to preempt all normal tasks, in a SCHED_RR fashion (ie,
+if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval
+rate). However if ISO tasks run for more than a tunable finite amount of time,
+they are then demoted back to SCHED_NORMAL scheduling. This finite amount of
+time is the percentage of _total CPU_ available across the machine, configurable
+as a percentage in the following "resource handling" tunable (as opposed to a
+scheduler tunable):
+
+ /proc/sys/kernel/iso_cpu
+
+and is set to 70% by default. It is calculated over a rolling 5 second average.
+Because it is the total CPU available, it means that on a multi CPU machine, it
+is possible to have an ISO task running as realtime scheduling indefinitely on
+just one CPU, as the other CPUs will be available. Setting this to 100 is the
+equivalent of giving all users SCHED_RR access and setting it to 0 removes the
+ability to run any pseudo-realtime tasks.
+
+A feature of BFS is that it detects when an application tries to obtain a
+realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the
+appropriate privileges to use those policies. When it detects this, it will
+give the task SCHED_ISO policy instead. Thus it is transparent to the user.
+Because some applications constantly set their policy as well as their nice
+level, there is potential for them to undo the override specified by the user
+on the command line of setting the policy to SCHED_ISO. To counter this, once
+a task has been set to SCHED_ISO policy, it needs superuser privileges to set
+it back to SCHED_NORMAL. This will ensure the task remains ISO and all child
+processes and threads will also inherit the ISO policy.
+
+Idleprio scheduling.
+
+Idleprio scheduling is a scheduling policy designed to give out CPU to a task
+_only_ when the CPU would be otherwise idle. The idea behind this is to allow
+ultra low priority tasks to be run in the background that have virtually no
+effect on the foreground tasks. This is ideally suited to distributed computing
+clients (like setiathome, folding, mprime etc) but can also be used to start
+a video encode or so on without any slowdown of other tasks. To avoid this
+policy from grabbing shared resources and holding them indefinitely, if it
+detects a state where the task is waiting on I/O, the machine is about to
+suspend to ram and so on, it will transiently schedule them as SCHED_NORMAL. As
+per the Isochronous task management, once a task has been scheduled as IDLEPRIO,
+it cannot be put back to SCHED_NORMAL without superuser privileges. Tasks can
+be set to start as SCHED_IDLEPRIO with the schedtool command like so:
+
+ schedtool -D -e ./mprime
+
+Subtick accounting.
+
+It is surprisingly difficult to get accurate CPU accounting, and in many cases,
+the accounting is done by simply determining what is happening at the precise
+moment a timer tick fires off. This becomes increasingly inaccurate as the
+timer tick frequency (HZ) is lowered. It is possible to create an application
+which uses almost 100% CPU, yet by being descheduled at the right time, records
+zero CPU usage. While the main problem with this is that there are possible
+security implications, it is also difficult to determine how much CPU a task
+really does use. BFS tries to use the sub-tick accounting from the TSC clock,
+where possible, to determine real CPU usage. This is not entirely reliable, but
+is far more likely to produce accurate CPU usage data than the existing designs
+and will not show tasks as consuming no CPU usage when they actually are. Thus,
+the amount of CPU reported as being used by BFS will more accurately represent
+how much CPU the task itself is using (as is shown for example by the 'time'
+application), so the reported values may be quite different to other schedulers.
+Values reported as the 'load' are more prone to problems with this design, but
+per process values are closer to real usage. When comparing throughput of BFS
+to other designs, it is important to compare the actual completed work in terms
+of total wall clock time taken and total work done, rather than the reported
+"cpu usage".
Thanks I'll flash and report back. Running tweaked 2.0
Push push
Sent from my SGH-T889 using xda app-developers app
Thank you for your work
Sent from my SGH-T889 using xda premium
Which zip do we install?
does your kernel support voodoo app?
edit: No voodoo support (I have to have voodoo support)
you should also add that your kernel changes boot screen/image
fast charging over USB?
CPU voltage edit, underclock?
I saw a whole bunch of GPU "editables" I think was cool.
If you are running jellybean flash the top download in the download section. Don't forget to make a backup first
Sent from my SGH-T889 using xda app-developers app
will the King also be releasing a ROM?
We'll see, but I gotta say Google has done such a great job with MB4 that at this time I don't see the need.
With that being said, I'm going to continue work on this kernel, and I'm pleased with the benchmark improvements I'm seeing compared to stock
Sent from my SGH-T889 using xda app-developers app
I flashed it but it wasn't recognizing my exFAT 64 GB SD. Going back to stock kernel
Sent from my SGH-T889 using xda premium
Thank you, I will definitely look into this
Have been listening to your inputs and have some nice additions for kernel #2
Sent from my SGH-T889 using xda app-developers app
AngryDinosaur said:
Thank you for your work
Sent from my SGH-T889 using xda premium
Hey buddy, have you noticed my magic trick yet?
Sent from my GT-N5110 using Tapatalk 2
Any chance u can add a dual boot with this kernel? Just wondering
Sent from my SGH-T889 using xda premium
theXeffect said:
Any chance u can add a dual boot with this kernel? Just wondering
Sent from my SGH-T889 using xda premium
There is someone working on that already.
http://forum.xda-developers.com/showthread.php?p=40410021
Sent from my GT-N7105 using xda premium
theXeffect said:
Any chance u can add a dual boot with this kernel? Just wondering
Sent from my SGH-T889 using xda premium
Yah I'll look into that, friend. I usually just release an AOSP and a Sammy version of the same kernel; one is for AOSP and one is for Samsung.
Side note: per you guys' requests, a voodoo patch among lots of other additions is coming in the kernel #2 update, which is shaping up nicely. Thanks for all your input, I appreciate it.
Sent from my SGH-T889 using xda app-developers app
Fast charging IMHO is the most useful, so the phone isn't dying while being used for GPS or whatever. Undervolting makes me nervous though. I'll watch for a bit to see if there are any reports of phones bricking before trying it. It's not as easy to swap this phone as it was with Sprint if it dies.
robl45 said:
Fast charging IMHO is the most useful, so the phone isn't dying while being used for GPS or whatever. Undervolting makes me nervous though. I'll watch for a bit to see if there are any reports of phones bricking before trying it. It's not as easy to swap this phone as it was with Sprint if it dies.
Fast charge with charge control is coming in kernel #2 (thanks to Andrei for writing that code from scratch)
As for your concern with bricking: I've been using kernel #1 for months now, do lots of 3D gaming and CPU + GPU intensive tasks, and haven't had one reboot or any instability. You wouldn't see any adverse effects from a very slight undervolt, as the chips still get ample juice to function properly.
Sent from my SGH-T889 using xda app-developers app
I just flashed your kernel #1 and I love it!!! It's so fast, I just love it. I'm on a stock Samsung Galaxy Note 2 with stock Jelly Bean 4.1.2, and with your kernel it's rocking. Thank you
Sent from my SGH-T889 using Tapatalk 4 Beta
INTRODUCTION -
This guide is intended to help those who are coming to the KT747 kernel by @ktoonsez. This thread and the subsequent posts are intended as a guide for users that are new to this kernel and its tweaker. Complete credit for the development of the kernel goes to @ktoonsez. You can find his kernel thread below.
KT747 - SGH-T999 Touchwiz & AOSP - Thread by Ktoonsez
Older Builds of the Kernel - Thread by @LuigiBull23
INTRODUCTION -
Users have quite often asked about the choice, usage and definition of the various parameters of the governors and schedulers. Hence I decided to write this guide as a way to help users understand the basic terms and parameters. I am perfectly aware that there are quite a few excellent guides on the different ICS & JB governors; as a matter of fact, I have linked to some of them. So even though I have given basic information on the governors in this kernel, it is not my primary purpose for this to serve as the ultimate guide on that subject.
This is a living guide and, given the vastness of the subject, I will continue to modify the OP as well as the subsequent posts.
Disclaimer:
I am not responsible if you end up with an expensive brick. Read the guide as much as you want and ask questions before proceeding with overclocking.
Overclocking and undervolting are highly debatable; some say they're good and some say they're bad... so it's up to you whether to proceed further. While on the subject of caution – I have personally managed to smoke (I mean literally, physically cause smoke from) a tablet by testing SetCPU overclocking on it.
Here's another nice write-up, the "Friends don't let Friends do extreme Overclock or Undervolt!" post by @dorimanx, the developer of another excellent kernel.
PURPOSE / INTENT -
The intent of this thread is to help new users learn, and to act as a reference for more knowledgeable users, on the governors & schedulers incorporated into this kernel. Another purpose is to help those who are new to overclocking & undervolting in general.
There are quite a few good guides on this forum regarding overclocking. So rather than writing one myself, I am going to refer to one by @bala_gamer, who has written a pretty comprehensive guide for the international version of the Galaxy SIII. Even though the hardware is different between that phone and this one, the guide is good enough for those who are starting down this path and want to get a basic understanding.
Another purpose of this thread is to provide a platform to discuss overclocking and undervolting settings specific to the SGH-T999 version. Since I wish to offer the experts a platform to discuss, in the interest of new users and their phones I have been careful to include warnings and footnotes where possible.
WHAT DO I GET BY OVERCLOCKING/UNDERVOLTING -
In short, modern microprocessors have, to a certain degree, a range of operational frequency steps. Also, especially for multiprocessor devices, it is possible to control when a processor comes into play and when it does not. Now, the main question that comes to mind is: why would you want to turn off processors? Well, consider this. It's kind of like a car engine. On a 6-cylinder engine, the fuel consumption is a lot more; but if you were to turn off 2 of the 6 cylinders, there is still power to drive and fuel consumption is lower. Similarly, with one of the processors turned off, battery drain is reduced, not to mention heat generation.
For a given processor, by design, the higher the frequency it operates at, the more raw power you have available to run applications. Typically, on some of the previous-generation single-processor phones, it's not possible to run Angry Birds or other games unless you run the processor consistently at its maximum operating frequency. So users may choose to overclock the phone in order to run such apps.
On the flip side, if you are a light user, then it will benefit you to turn off the other processor(s). This saves the battery. Given that there are multitudes of frequency steps, if the processor operates at a lower frequency, there is less heat generated and less battery used. Before I proceed further, I am respectfully quoting Castle_Bravo from here. He has summarized perfectly what I'd have said otherwise.
castle_bravo said:
In the PC world we have things like clock speeds, latency, read speed, bus speeds and things of this nature. Right now I'm going to talk about overclocking a processor, whether it's a GPU or a CPU. When we overclock these devices, meaning make them go faster than originally rated by the manufacturer using software of any kind, these devices will also work harder. The faster the clock speed, the hotter the component gets and the shorter its life span is due to thermal stresses. Hence our manufacturer rating of speeds.
There are two ways to combat heat; heat is the main enemy in any high-powered system (do a YouTube search of running a CPU with no heat sink): add more cooling via more hardware, or lower the voltages applied to the component. Adding more cooling hardware is the preferred method. This is the best way because now that the component is working faster at the same temperatures it was at before on stock clock speeds, it is, in terms of math, working LESS. This applies to RAM, video cards, CPUs and the like.
Typically, as you raise clock speeds you also have to RAISE voltages in order to keep it stable. There are exceptions: in a VERY minor overclock you can actually lower voltages. The trade-off here is that with the larger cooling equipment and the faster clock speeds the processor will spend less time at peak load and return to idle faster. If set up correctly you can actually draw less cumulative power using higher voltages. This is assuming temperatures are the same for both scenarios of stock clock and overclock. BUT, more power applied equals more heat. Now, as we raise clock speeds and raise our voltage, we try to be on the edge of not enough. Because what does more power mean? More heat. And what does more heat mean? Less component life. Not only that, but the components have a "healthy" voltage band due to tolerances in their manufacture, so we don't want to exceed that.
We undervolt mainly to protect the equipment. Secondary is battery savings. We do not have the option of installing more hardware to cool our devices, so all we can do is lower voltages. Lowering voltages will help keep the component cool because it is pulling fewer electrons. More power = more heat. But take that voltage too far down and the component doesn't have enough power to perform its job properly or efficiently, making it work HARDER and become unstable, which is counterproductive. What happens when a component works harder? It heats up. So we can actually have a reverse effect from our intended power savings.
Last but not least, here's a nice Q&A by XDA user @droidphile for further reading. Although it is written for the Galaxy S2, quite a few parts do apply, as this device too is dual-core. Since that's a different SoC, apply the settings from that post with caution, if you wish to apply them at all.
Implementation of Overclocking & Undervolting by using KTweaker -
Given that Ktoonsez has an excellent app for this kernel, my intention is to provide a way for you to make the best use of it.
By default, when the kernel is freshly installed and you open the app for the first time, there should not be any error messages of any kind. The app settings are not permanent, so every time you reboot the phone you are restored to the original defaults. This is a great way to test out various settings to see if they work out or turn the phone into a slowpoke.
As you can see in the screen shot, there are 7 major options.
GENERAL – This is the main setting area. This is where you control the phone's operation and other settings around how it behaves. We will go into great detail later on.
VOLTAGES – This allows you to control the CPU operating voltage at a given Frequency step. It plays a significant role when you are undervolting or Overclocking.
EXTRAS – Despite the name, this section contains quite a few important settings that modify the phone's behavior; specifically, how the phone reacts when the screen is turned off or when you are navigating or charging the phone.
SET OPTIONS ON BOOT – As the name suggests, you get to choose if the settings you have changed are applied after you restart or not. Also if you choose to, you can also specify a time duration after Restart before your settings get applied.
BACKUP PREFS TO SDCARD – This simply allows you to make a backup of your settings to the internal SD Card. The path is /SDCARD/KTWEAKER/. You can also name the settings optionally.
RESTORE PREFS FROM SDCARD – As the name suggests, you can re-load your backed up settings. Comes in handy if you have one set of settings that really work and you want to experiment further.
LOAD DEFAULTS – This option simply sets all the settings to the value KTOONSEZ has set up in the kernel as initial values.
GENERAL SECTION -
This is perhaps the main section of the Kernel Controls. In this section you can choose wide variety of options that directly determine the performance of the phone as well as Battery life as a result.
There are several sub-menus as follows. As you can see in the screen-shot, there is a small comment on what the section does.
1. ENABLE OC STEPS
2. LOCK FREQUENCIES – There’s a little sub-section to choose minimum and maximum operating frequencies.
3. I/O SCHEDULERS
4. I/O SCHEDULER ADJUSTMENTS
5. CPU GOVERNORS
6. CPU GOVERNOR ADJUSTMENTS
7. AUTO HOTPLUG
1. ENABLE OC STEPS -
This is a simple check box to enable overclocking. Select this only if you are going to overclock. By doing so, you get a higher range of frequencies to choose from for the minimum and maximum frequencies. Do note: just because it lets you specify higher frequencies does not mean you should set the highest value. Permanently operating at an overclocked frequency may cause physical damage to your phone. Remember, Qualcomm has set 1500 MHz as the normal operating frequency for this CPU.
2. LOCK FREQUENCIES -
This is again a check box; it effectively pegs the operating frequencies to within the range specified by the Minimum Frequency and Maximum Frequency sliders.
The two sliders that are part of this section need to be set after very careful consideration. If you set the minimum frequency too low, then you run the risk of a sluggish and unresponsive phone when there are no apps running. For the maximum frequency, remember: the higher the frequency, the more heat will be produced, and the battery will drain faster. So give it some thought before setting the limits. Choose the values based on whether you are looking to save battery or to get high performance and responsiveness (a shell equivalent is sketched after the note below).
Note – A side note on this: the CPU is actually located in the back, below the battery compartment, so you will notice the heat in that area. If you happen to have a bumper case on the phone, you won't notice the actual temperature unless the phone is really hot.
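If you prefer a shell over the KTweaker sliders, the same floor and ceiling can usually be pinned through the standard cpufreq sysfs interface. Treat this as a minimal sketch: values are in kHz, must match entries in scaling_available_frequencies, and the numbers shown are placeholders rather than recommendations. Like the app's defaults, these writes do not survive a reboot.
    # from a root shell (adb shell, then su)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
    # pick a floor and a ceiling from the list above (values in kHz)
    echo 384000  > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
    echo 1512000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq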
7. AUTO HOTPLUG
This is a simple checkbox that enables hot-plugging support. Hot-plugging is a concept borrowed from server Linux and is applicable to Android in the same manner. In short, it allows CPUs to be removed from service or added to service on the fly, without needing an OS-level restart. Effectively, this gives the kernel the choice to take CPU cores offline or bring them online. Some of the governors we discuss already support hotplugging; this checkbox ensures support for the remaining governors.
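For reference, CPU hotplug is exposed through sysfs on most kernels of this generation, which is the mechanism the hotplugging governors drive. A sketch of doing it by hand (don't force a core offline permanently unless that is what you actually want):
    # as root; cpu1 is the second core
    cat /sys/devices/system/cpu/cpu1/online       # 1 = online, 0 = offline
    echo 0 > /sys/devices/system/cpu/cpu1/online  # take the second core offline
    echo 1 > /sys/devices/system/cpu/cpu1/online  # bring it back online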
3. I/O SCHEDULER –
Scheduler Guide Link
This menu offers a choice of all the android Schedulers available in this kernel.
Given that there are so many excellent guides on the individual schedulers, I do not wish to provide the same information again. (Besides, this is indeed a vast topic by itself.) So without going into specific details, I am going to summarize what each one means from a layman's point of view. For those with a more technical inclination, I have provided a link to read further on each scheduler.
Think of an I/O scheduler like an executive assistant to the disc or storage of the phone. Just like an assistant, it effectively manages reads from and writes to the disc for all processes. In particular, it determines which process gets priority and/or bandwidth. You have to understand that each app you run has its own process, as well as the child processes it triggers. In addition, the OS itself has its own processes that monitor various aspects of the phone. Effectively, all of these processes are competing to read or write. Based on that knowledge, you can choose whatever works best for your usage pattern. Later on I will be providing some sample settings to get you started.
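As a quick illustration (assuming the internal storage shows up as mmcblk0, which can differ between devices, and noting that the exact list of schedulers depends on the kernel build), the active scheduler can also be inspected and switched from a root shell; KTweaker does the same thing behind the scenes:
    # the scheduler shown in [brackets] is the active one
    cat /sys/block/mmcblk0/queue/scheduler
    echo deadline > /sys/block/mmcblk0/queue/scheduler
    cat /sys/block/mmcblk0/queue/scheduler    # confirm the change took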
Noop:
The noop scheduler is the simplest of them. It is best suited for cell phone storage since that is flash media. As flash drives do not require rearrangement of I/O requests, the data that comes first is handled first (first in, first out). It's basically not a real scheduler, as it leaves the scheduling work to the hardware.
Benefits:
- Adds all incoming I/O requests to a first-come, first-served queue and services them with the fewest number of CPU cycles, so it is also battery friendly
- Is suitable for flash drives because there is no seek penalty
- Good data throughput on database systems
- Is nearly a real-time scheduler.
- Characterized by reducing the waiting time of each process – good for database access and queries.
- Bandwidth requirements of a process, e.g. what percentage of CPU is used, are easy to calculate.
Disadvantages:
- If the system is overloaded, a set of processes may get lost, and this is not as easy to predict
- Reducing the number of CPU cycles corresponds to a simultaneous decline in performance.
Deadline:
This scheduler has the goal of reducing the I/O wait time of a process. This is done using the block numbers of the data on the drive. Outlying block numbers are also serviced, as each request is given a maximum delivery time (its deadline). It is very popular, much like BFQ:
Benefits:
- Is nearly a real-time scheduler.
- Characterized by reducing the waiting time of each process – the best scheduler for database access and queries.
- Bandwidth requirements of a process, e.g. what percentage of CPU it uses, are easy to calculate.
- Like noop, it is well suited to flash drives.
Disadvantages:
- If the system is overloaded, a set of processes may get lost, and this is not as easy to predict. It is indeed better than BFQ, but VR is even better.
ROW:
Reference - http://lwn.net/Articles/509829/
Read Over Write scheduler. This scheduler holds back disc write operations, giving higher priority to read operations.
Mobile devices prefer user experience; hence, the READ I/O requests get as much priority as possible. The main idea is: if there are READ requests in the pipe, dispatch them, but don't delay the WRITE requests too much.
All the incoming requests are kept in multiple queues according to their priority. The dispatching of requests is done in a Merry-Go-Round fashion with a different slice of time for each queue.
Presently there are 6 types of queues the requests are parked in
[FONT="]- [/FONT]High priority READ queue
[FONT="]- [/FONT]High priority Synchronous WRITE queue
[FONT="]- [/FONT]Regular priority READ queue
[FONT="]- [/FONT]Regular priority Synchronous WRITE queue
[FONT="]- [/FONT]Regular priority WRITE queue
[FONT="]- [/FONT]Low priority READ queue
If in a certain dispatch cycle one of the queues was empty and didn't use its time, that queue will be marked as "un-served". For example: while in the middle of executing requests of queue Y, a request arrives in queue X (X having priority over Y, and having been un-served in the previous cycle). Queue X will then preempt queue Y. This won't restart the cycle. Once queue Y is done with its current request, the scheduler will go to X and allow it to finish its request, before proceeding with the rest of the queues in the cycle.
For READ request queues idling is allowed to give the application(s) a chance to add more requests. The idling is enabled if the application is making requests in rapid succession.
The ROW scheduler supports special services for memory cards that
support high priority requests. In addition it supports rescheduling of interrupted requests. For example, if a sudden high priority read request comes in while a long write request is being worked on, the scheduler will inform the device and the device can stop the write request to serve the high priority read. In such a case the device may send back the interrupted write request so that the scheduler can dispatch it later according to the scheduler policy.
CFQ:
CFQ (Completely Fair Queuing) is similar to Deadline. It maintains scalable per-process I/O queues, and the available I/O bandwidth is distributed fairly and evenly among all I/O requests. It keeps statistics about blocks and processes, which are then used to guess when the next block will be requested and by which process; each process queue contains the requests of synchronous processes, which in turn depend upon the priority of the original process. There is a second version with some fixes, such as addressing request starvation, and some small backward seeking integrated to improve responsiveness.
Benefits:
- It has the goal of delivering balanced I/O performance
- The easiest one to configure
- Excellent on multiprocessor systems
- Best database performance after Deadline
Disadvantages:
- Some users reported that media scanning takes a very long time
- The fair and even distribution of bandwidth can cause delays in the boot process.
- Jitter (worst-case delay) can sometimes be caused by the number of tasks competing with each other
BFQ:
Instead of dividing requests into time slices as CFQ does, BFQ assigns budgets. The active process is granted the flash drive until it has exhausted its budget (a number of sectors on the flash drive).
Benefits:
- Has a very good USB data transfer rate.
- Considered the best scheduler for playback of HD video recordings and video streaming (due to less jitter than the CFQ scheduler and others)
- Regarded as a very precisely working scheduler
- Delivers 30% more throughput than CFQ
FIOPS:
The Fair, Efficient Flash I/O Scheduler is geared well toward modern flash-based storage media. I haven't been able to find a lot of documentation on it; I will keep looking.
SIO:
It aims to achieve low latency for I/O requests with minimal effort. It does not sort requests into priority queues; instead it simply merges requests. This scheduler is a mix between noop and deadline. There is no reordering or sorting of requests.
Benefits:
- It is simple and stable.
- Minimized starvation of requests.
Disadvantages:
- Slower random write speeds on flash drives as opposed to other schedulers.
- Sequential read speeds on flash drives are not as good.
V(R):
Unlike other schedulers, synchronous and asynchronous requests are not handled separately; instead, a fair and balanced deadline is imposed across requests, and the next request to be served is chosen as a function of its distance from the last request. It is a very good scheduler with elements of the deadline scheduler. It will probably be the best for MTD Android devices. It also scores the highest in benchmarks, but it is also an unstable scheduler, because its performance can fluctuate below or above average.
Benefits:
- Is the best scheduler for benchmarks
Disadvantages:
- Performance variability can lead to different results
- Very often unstable
ZEN:
This scheduler is actually based on a combination of the NOOP, SIO & VR schedulers. It handles synchronous & asynchronous requests with the same priority. It uses a deadline in order to derive or determine the priority of a process.
FIFO:
First in First Out Scheduler. As the name says, it implements a simple priority method based on processing the requests as they come in.
4. I/O SCHEDULER ADJUSTMENTS
Ref -
https://www.kernel.org/doc/Documentation/block/
http://www.linux-mag.com/id/7572/
http://algo.ing.unimo.it/people/paolo/disk_sched/description.php
The Scheduler Adjustments are parameters that determine how the selected Scheduler behaves. Needless to say the list of parameters in this menu will change depending on which Scheduler you chose in the previous step. The screenshot depicts parameters for the ROW Scheduler. Even though there are quite a few Schedulers available in this kernel, parameters of some of them tend to be similar in effect. Hence I have combined the Schedulers whose parameters are similar.
[FONT="] Deadline, SIO and Zen: [/FONT][FONT="]
fifo_batch: This parameter controls the maximum number of requests per batch.[/FONT][FONT="]It tunes the balance between per-request latency and aggregate throughput. When low latency is the primary concern, smaller is better (where a value of 1 yields first-come first-served behavior). Increasing fifo_batch generally improves throughput, at the cost of latency variation. [/FONT]The default is 16.[FONT="]
front_merges: A request that enters the scheduler is possibly contiguous to a request that is already on the queue. Either it fits in the back of that request, or it fits at the front. Hence it’s called either a back merge candidate or a front merge candidate. Typically back merges are much more common than front merges. You can set this tunable to 0 if you know your workload will never generate front merges. Otherwise leave it at its default value 1.
[/FONT][FONT="]read_expire: In all 3 schedulers, there is some form of deadline to service each Read Request. The focus is read latencies. When a read request first enters the io scheduler, it is assigned a deadline that is the current time + the read_expire value in units of milliseconds. The default value is 500 ms.
write_expire: Similar to Read_Expire, this applies only to the Write Requests. The default value is 5000 ms.[/FONT][FONT="]
writes_starved: Typically more attention is given to the Read requests over write requests. But this can’t go on forever. So after the expiry of this value, some of the pending write requests get the same priority as the Reads. Default value is 1.
This tunable controls how many read batches can be processed before processing a single write batch. The higher this is set, the more preference is given to reads.
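These knobs live under the block device's iosched directory once the matching scheduler is active. A minimal sketch, assuming mmcblk0 with deadline selected (SIO and Zen expose a similar but not identical set, and the values below are illustrative, not recommendations):
    ls /sys/block/mmcblk0/queue/iosched/
    cat /sys/block/mmcblk0/queue/iosched/fifo_batch          # default 16
    echo 8   > /sys/block/mmcblk0/queue/iosched/fifo_batch   # favour latency over throughput
    echo 250 > /sys/block/mmcblk0/queue/iosched/read_expire  # tighter read deadline, in ms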
CFQ
back_seek_max: The scheduler tries to guess when the next request will require going backwards from the current position on the disc. Since such backward seeks can be time consuming, it may, in anticipation, move back on the disc prior to the next request. This setting, given in KB, determines the maximum distance to go back. The default value is 16 KB.
Do note that in a cellphone or tablet, the storage is actually flash memory; there is no disk head to be re-positioned. As such this is not that significant, since backward reads are not that costly.
back_seek_penalty: This parameter is used to compute the cost of backward seeking. If the backward distance of a request is just 1 from a front request, then the seeking cost of the two requests is considered equivalent and the scheduler will not bias toward one or the other. This parameter defaults to 2, so if the distance is only 1/2 of the forward distance, CFQ will consider the backward request to be close enough to the current head location to be "close". Therefore it will consider it as a forward request.
fifo_expire_async & fifo_expire_sync: fifo_expire_async sets the timeout of asynchronous requests. CFQ maintains a FIFO (first-in, first-out) list to manage timed-out requests. The default value is 250 ms. A smaller value means a timeout is acted on much more quickly than with a larger value. Similarly, fifo_expire_sync applies to synchronous requests; its default is 125 ms.
group_idle: If this is set, CFQ will idle before executing the last process issuing I/O in a cgroup. This should be set to 1 along with using proportional-weight I/O cgroups and setting slice_idle to 0, as flash memory is a fast storage mechanism.
group_isolation: If set (to 1), there is stronger isolation between groups at the expense of throughput. If disabled, the scheduler is biased towards sequential requests. When enabled, group isolation provides balance for both sequential and random workloads. The default value is 0 (disabled).
low_latency: When set (to 1), CFQ attempts to build a backlog of write requests. It will give a maximum wait time of 300 ms for each process issuing I/O on a device. This offers fairness over throughput. When disabled (set to 0), it will ignore target latency, allowing each process in the system to get a full time slice. This is enabled by default.
quantum: This option controls the maximum number of requests being processed at a time. The default value is 8. Increasing the value can improve performance, but the latency of some I/O may be increased due to more requests being buffered inside the storage.
slice_async: This parameter controls the maximum time slice given to asynchronous requests at a time. The default value is 40 ms.
slice_idle: When a task has no more requests to submit in its time slice, the scheduler waits for a while before scheduling the next thread, to improve locality. The default value is 0, indicating no idling. However, a zero value increases the overall number of seeks, so a non-zero value may be beneficial.
slice_sync: This setting determines the time slice allotted to a process's I/O. The default is 100 ms.
BFQ
timeout_sync & timeout_async: These parameters determine the maximum disk time given to a task, for the synchronous and asynchronous queues respectively. They allow the user to control the latencies imposed by the scheduler.
max_budget: This determines how much of a queue's requests are serviced, based on a number of sectors on disc. A larger value increases the throughput for single tasks and for the system, in proportion to the percentage of sequential requests issued; the consequence is an increase in the maximum latency a request may incur. The default value is 0, which enables auto-tuning.
max_budget_async_rq: This setting determines the maximum number of requests served from the async queues before selecting a new queue.
low_latency: When this is set to 1 (the default), interactive and soft real-time applications experience lower latency.
Row:
hp_read_quantum: Dispatch quantum for the high priority READ queue
rp_read_quantum: Dispatch quantum for the regular priority READ queue
hp_swrite_quantum: Dispatch quantum for the high priority Synchronous WRITE queue
rp_swrite_quantum: Dispatch quantum for the regular priority Synchronous WRITE queue
rp_write_quantum: Dispatch quantum for the regular priority WRITE queue
lp_read_quantum: Dispatch quantum for the low priority READ queue
lp_swrite_quantum: Dispatch quantum for the low priority Synchronous WRITE queue
read_idle: Determines the length of idling on the read queue, in msec (in case idling is enabled on that queue).
read_idle_freq: Determines the frequency of inserting READ requests that will trigger idling. This is the time in msec between inserting two READ requests.
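To see exactly which of these knobs your kernel exposes for whatever scheduler is currently selected, just list the iosched directory. A sketch, assuming mmcblk0 with ROW active and an illustrative value only:
    ls /sys/block/mmcblk0/queue/iosched/
    cat /sys/block/mmcblk0/queue/iosched/hp_read_quantum
    echo 6 > /sys/block/mmcblk0/queue/iosched/rp_read_quantum   # give regular-priority reads a bigger slice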
5. CPU GOVERNORS –
THIS SECTION IS STILL IN PROGRESS. I WILL KEEP UPDATING. KINDLY BEAR WITH ME.
References -
http://androidforums.com/xperia-mini-all-things-root/513426-android-cpu-governors-explained.html
http://forum.xda-developers.com/showpost.php?p=28647926&postcount=1
http://pic.dhe.ibm.com/infocenter/l...ic=/liaai.cpufreq/TheConservativeGovernor.htm
http://lists.linaro.org/pipermail/linaro-kernel/2012-February/001120.html
A governor manages the CPU's operating frequency, much as the scheduler manages the storage. Originally, a set of governors came from the Linux kernel. Over time, newer governors were introduced for the Android architecture, and several developers added their own governors by modifying or tweaking existing ones.
To fully utilize the governors, you need to disable a file called mpdecision, located under /system/bin. It interferes with the governor's operation and won't allow you to take full advantage of its settings. Typically you can do this by renaming the file using ES File Explorer and rebooting the phone. Note that if you use the TouchWiz JELLYBEAN version, you should rename /system/bin/qosmgr to /system/bin/qosmgr.bak instead.
Essentially, with either file present, the governor's instructions for the second CPU are overridden. By renaming them, they are not loaded at boot, so the governor's authority is restored.
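If you would rather do the rename over adb than with ES File Explorer, something along these lines works on most rooted setups (a sketch only; back the file up instead of deleting it, and substitute qosmgr on the TouchWiz Jelly Bean builds as noted above):
    adb shell
    su
    mount -o remount,rw /system
    mv /system/bin/mpdecision /system/bin/mpdecision.bak
    mount -o remount,ro /system
    reboot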
ONDEMAND -
Reference - http://pic.dhe.ibm.com/infocenter/l...ic=/liaai.cpufreq/TheConservativeGovernor.htm
The ondemand governor dynamically changes the CPU frequency in response to CPU utilization. If CPU utilization rises above the up_threshold parameter, the ondemand governor increases the CPU frequency to scaling_max_freq. When CPU utilization falls below this threshold, the governor decreases the frequency in steps, dropping to the next lowest frequency each time until it reaches scaling_min_freq. After each sampling_rate interval, the current CPU utilization is re-examined and the process is repeated, dynamically adjusting the CPU frequency to the current load. Since the governor needs time to respond, performance might be reduced if the usage changes frequently.
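On most kernels of this era the ondemand knobs sit in a global sysfs directory (some builds keep them per-CPU instead), so you can read and nudge them directly; a sketch with illustrative values only:
    ls /sys/devices/system/cpu/cpufreq/ondemand/
    cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
    echo 90    > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold   # ramp up later (percent)
    echo 50000 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate  # poll interval in microseconds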
CONSERVATIVE –
Reference- http://pic.dhe.ibm.com/infocenter/lnxinfo/v3r0m0/topic/liaai.cpufreq/TheConservativeGovernor.htm
This governor prefers the lowest possible clock speed as often as possible. Only upon a larger, persistent load on the CPU will the conservative governor raise the CPU clock speed.
It will tend to keep the CPU running at lower speeds and consequently lower voltages, which inherently conserves the battery.
Like the ondemand governor, it steps the CPU through the operating frequencies by dynamically adjusting frequencies based on processor utilization. However, the conservative governor increases and decreases CPU speed more gradually, as opposed to the hair-trigger response of the ondemand governor. This governor increases the frequency step by step under CPU load but jumps to the lowest frequency when the CPU load is removed. Thus it aims to dynamically adjust the CPU frequency to current utilization, without jumping to the maximum frequency. If CPU utilization is above up_threshold, this governor will step up the frequency to the next highest frequency below or equal to scaling_max_freq. If CPU utilization is below down_threshold, this governor will step down the frequency to the next lowest frequency until it reaches scaling_min_freq. After each sampling_rate interval, the current CPU utilization will be re-examined and the same algorithm applied to dynamically adjust the CPU frequency to current utilization (a sketch of its tunables follows after the note below).
Note - Depending on how the developer has implemented this governor, and the minimum clock speed chosen by the user, you may experience some choppiness or random freezes, so you need to choose its settings judiciously.
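Conservative exposes a similar sysfs directory to ondemand, plus the down_threshold and freq_step knobs that give it its gradual character. Again just a sketch with placeholder values:
    ls /sys/devices/system/cpu/cpufreq/conservative/
    echo 40 > /sys/devices/system/cpu/cpufreq/conservative/down_threshold  # scale down once load drops below 40%
    echo 10 > /sys/devices/system/cpu/cpufreq/conservative/freq_step       # take bigger 10% frequency steps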
KTOONSERVATIVE –
As I said earlier, Ktoonservative is a hotplug derivative of the traditional Conservative governor. Hot-plugging allows the governor to turn off the second core of the processor dynamically. This maintains a healthy balance of performance and battery life.
I wish to respectfully quote @freecharlesmanson.
Ondemand scales to the highest frequency as soon as a load occurs. Conservative scales upward based on the frequency step variable, which means for the most part it will scale through every frequency to achieve the target load thresholds. What this practically means is that ondemand is prone to wasting power on unneeded clock cycles. Ondemand also features something called a down differential; this variable determines how long the governor will remain at the given frequency before scaling down. Conservative does not have this, but instead relies on having a down threshold, which ensures that as soon as the load drops below a given variable it scales down as fast as the sampling rate allows. The result is a governor which attempts to keep the load level tolerable and save you battery! Now, Ktoonservative is that, but in addition it contains a hotplugging variable which determines when the second core comes online. The governor shuts the core off when it drops below the hotplug down threshold, thus giving us a handle on the second performance factor in our CPU's behavior. While by default conservative is a poor performer, it can be made to perform comparably to even the performance governor. Here are some settings to discuss and start with. They are slightly less battery friendly under a load but very, very well performing.
SMARTASSH3 –
This is a governor that tries to balance performance and battery life. It is based on the SmartassV2 governor; since it was tweaked by H3ROS, the name was modified. V2 in turn is a derivative of the original SmartAss governor. It tries to attain an ideal frequency by ramping up to that frequency quickly; once reached, further ramping is done very slowly. The ideal value is user defined in the governor settings.
The governor also has different frequencies for Screen ON and Screen OFF states along with Sleep state.
NIGHTMARE -
This is one of the newer performance-oriented governors. It tries to reach the top frequency by scaling rapidly; once reached, it tries to maintain that frequency as much as possible. It is based on the PegasusQ governor.
It is a multi-core version of the ondemand governor with integrated hot-plugging. Ongoing processes sit in a run queue and can run simultaneously; the processes are arranged according to their priority values. To ensure that each process gets its fair share of resources, each is run for a certain period, then stopped and placed back in the queue for its next turn. This continues until the processes are terminated.
DANCEDANCE –
This governor is based on the Conservative governor. It was created by Snuzzo by raising the ramp-up rate and modifying the sleep routines.
WHEATLEY –
This one took some digging, as Wheatley is not a Linux governor brought to Android. XDA developer @phone_user implemented this governor for his Samsung Galaxy Nexus kernel.
In essence, this governor takes a novel approach to power saving. As you may have deduced so far, making the CPU operate at the lowest needed frequency (like conservative does) can potentially backfire, with the CPU taking more time (and consuming more over time) to finish the task. So Wheatley targets both the CPU frequencies and its deep sleep state (aka the C4 state), in which the CPU voltage is reduced to avoid unnecessary power consumption.
So respectfully quoting him for the details.
phone_user said:
The previous benchmarks of the usage of the C4 state for different activities have shown that for 'light' tasks like browsing the internet, reading (for example emails or eBooks) and the average app the device spends about 40% of the time in C4 with acceptable average residencies of around 11ms. For more demanding tasks like games and video playback the C4 state is still being used however the efficiency is reduced due to the low average residencies of below 5ms (considering that the wakeup latency is 1.3ms).
I have run a few tests and as it turns out, for demanding tasks the efficiency of the C4 state is significantly reduced due to these low residency times (= large number of wakeups) to a point that the good old frequency scaling is indeed more efficient with larger battery savings. So unfortunately, relying on the C4 state alone for power savings for all tasks is not a good option.
However, unfortunately we also cannot simply use one of the available standard governors, since they always try to minimize the frequency without taking into account that this behaviour diminishes the efficiency of the C4 state, since it hinders a proper race-to-idle. So, taking advantage of this knowledge, what a good governor should do is use the maximum frequency whenever the C4 state is properly used with acceptable average residencies, and only scale down when the average residencies get too low (or C4 is not used at all, of course).
Building on the classic 'ondemand' governor I implemented this idea in my new Wheatley governor. For internet browsing the time spent in C4 has increased by 10% points and the average residency has increased by about 1ms. I guess these differences are mostly due to the different browsing behaviour (I spent more of the time multi-tabbing lately). But at least we can say that Wheatley does not interfere with the proper use of the C4 state during 'light' tasks. For music playback with the screen off the time spent in C4 is practically unchanged; however the average residency is reduced from around 30ms to around 18ms, but this is still more than acceptable.
So the results show that Wheatley works as intended and ensures that the C4 state is used whenever the task allows a proper efficient usage of the C4 state. For more demanding tasks which cause a large number of wakeups and prevent the efficient usage of the C4 state, the governor resorts to the next best power saving mechanism and scales down the frequency. So with the new highly-flexible Wheatley governor one can have the best of both worlds.
ABYSSPLUG –
This is a modified version of the Hotplug governor. It is similar to the ondemand governor, but steps through CPU frequencies more accurately depending on CPU load. Like the Hotplug governor, it turns off unused CPU cores upon low CPU utilization.
BADASS –
ASSWAX –
SlP –
PEGASUSQ –
ADAPTIVE –
INTERACTIVE –
This governor is designed for latency-sensitive workloads, such as interactive user apps. The interactive governor aims to be significantly more responsive, ramping the CPU up quickly when CPU-intensive activity begins.
Existing governors sample CPU load at a particular rate, typically every X ms. This can lead to lag from the time the user begins interacting with a previously idle system until the next sample period.
The interactive governor, instead of sampling CPU load periodically, checks whether to scale up the CPU frequency immediately after the CPU becomes active. This is done with a timer that fires within 1-2 ticks. If the CPU is very busy after becoming active, the governor assumes the CPU is underpowered and ramps it to MAX speed.
If the CPU was not sufficiently busy to immediately ramp to MAX speed, the governor instead evaluates CPU load, choosing the higher of the longer-term load and the short-term load since idle exit to determine the CPU speed to ramp to.
A realtime thread is used for scaling up, giving the remaining tasks the CPU performance benefit. This is unlike existing governors, which are more likely to schedule other tasks to occur after your performance-starved tasks have completed.
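The interactive governor's behaviour is shaped by a handful of sysfs tunables of its own; the exact set varies with the governor version built into a given kernel, so treat this as a sketch with example values:
    ls /sys/devices/system/cpu/cpufreq/interactive/
    cat /sys/devices/system/cpu/cpufreq/interactive/go_hispeed_load          # load % that triggers the initial jump
    echo 1134000 > /sys/devices/system/cpu/cpufreq/interactive/hispeed_freq     # frequency (kHz) jumped to
    echo 40000   > /sys/devices/system/cpu/cpufreq/interactive/min_sample_time  # µs to hold a speed before dropping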
USERSPACE -
This governor allows more granular control over the power policy of the device: it allows user-space apps to set the processor frequency. It does not dynamically change the CPU frequency or react to processor load; rather, it only provides a mechanism to set the frequency through the scaling_setspeed parameter. However, KT747 does not implement any tunable parameters for the user.
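For completeness, this is roughly how a root shell (or an app with the right permissions) would drive the userspace governor. scaling_setspeed is the standard cpufreq name for the knob, and the frequency is just an example that must exist in scaling_available_frequencies:
    echo userspace > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo 1026000   > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed   # pin cpu0 at 1.026 GHz (value in kHz)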
POWERSAVE -
As the name says, the only priority for this governor is power saving, with no regard for apps being slowed down. This can be counterintuitive, since slowed-down apps will take even longer and may thus drain the battery further.
It sets the CPU to the value of the scaling_min_freq parameter. (Default value is the lowest available processor frequency). However, KT747 does not offer this parameter as a tunable within the KTweaker application.
PERFORMANCE -
As the name says, this governor focuses exclusively on providing consistent minimum latency. It always sets the CPU speed to the frequency defined in the scaling_max_freq parameter (the default is the highest available processor frequency). However, KT747 does not expose this setting via the KTweaker application.
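Switching between any of the governors described above ultimately comes down to a single sysfs write per CPU (KTweaker just wraps this); a sketch:
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    # if cpu1 is online it may expose its own scaling_governor node to set as well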
6. CPU GOVERNOR ADJUSTMENTS
Governor Adjustments are parameters for a given governor that you can further tweak. There are certain performance scripts out there that may set some of these parameters as well; one such example is the System Performance Mod Thunderbolt! by @pikachu01.
Given below are some of the parameters of commonly used governors. There are quite a lot of parameters for each governor, and listing each one would be pretty intensive. I may choose to add them in future as time permits AND if there is demand for it. In addition, I am adding Hide tags for each governor in order to tidy up the post.
ONDEMAND GOVERNOR -
ignore_nice_load - You can use the ignore_nice_load option to ignore all processes that run with a positive nice value; these will not be counted toward the overall CPU utilization. Set this parameter to 1 if you do not care how long it takes for such processes to complete.
sampling_rate - Measured in microseconds (µs), this is how often the kernel looks at the CPU usage and makes decisions on what to do about the frequency. Higher values mean the CPU is polled less often. For lower frequencies this could be considered an advantage, since it might not jump to the next frequency very often, but for higher frequencies the scale-down time will be increased.
up_threshold - Measured as a percentage (1-100). When CPU load reaches this point, the governor will scale the CPU up. A higher value means less responsiveness, and lower values correspond to more responsiveness at the cost of battery.
powersave_bias - The default value is 0. Setting a higher value will bias the governor towards lower frequency steps. Use this if you want the CPU to spend less time at higher frequencies. A better alternative would be to underclock to a lower maximum frequency rather than using powersave_bias.
The powersave_bias parameter modifies the behavior of the ondemand governor to save more power by reducing the target frequency by a specified fraction. By default, the governor selects the minimum processor frequency that can still complete a workload with minimal idle time, which should give the best performance-to-power ratio. If you prefer a greater emphasis on power efficiency than on performance, set powersave_bias to a value between 1 and 1000 to reduce the target frequency by that many thousandths. For example, setting powersave_bias to 100 causes a one-tenth reduction in target frequency: if the governor would have requested 2 GHz, it instead requests 1.8 GHz. If 1.8 GHz is an exact match for an available hardware frequency (listed in the scaling_available_frequencies parameter), the processor is set to that frequency. If 1.8 GHz is not available, the processor alternates between the closest available upper and lower frequencies for an average of 1.8 GHz. The default value is 0.
sampling_down_factor - In the simplest form, sampling_down_factor determines how often CPU should stay at higher frequencies when truly busy. Default behavior is fast switching to lower frequencies (1). Having sampling_down_factor set to 1 makes no changes from existing behavior (for the non-modified ondemand), but having sampling_down_factor set to a value greater than 1 causes it to act as a multiplier for the scheduling interval for re-evaluating the load when the CPU is at its highest clock frequency (which is scaling_max_freq) due to high load. This improves performance by reducing the overhead of load evaluation and helping the CPU stay at its highest clock frequency when it is truly busy, rather than shifting back and forth in speed. This tunable has no effect on behavior at lower frequencies/lower CPU loads.
down_differential - This factor indirectly sets the 'down threshold' of ondemand. After spending sampling_down_factor * sampling_rate at max frequency because of high load, the governor samples the load again and estimates a new target frequency: the lowest frequency that would not trigger up_threshold in the next sample (since triggering up_threshold would send the CPU straight back to max frequency). down_differential is taken into account as breathing room in this choice. The target frequency is calculated as max_load_freq / (up_threshold - down_differential). The result may not exist in the freq_table, in which case the cpufreq driver rounds it to a value that does. max_load_freq is the theoretical frequency at which the CPU could handle a 100% workload; it is usually a value below scaling_max_freq. See this post by AndreiLux for more info.
freq_step - Whenever the up-scaling logic is triggered, the governor instructs the CPU to raise its frequency by freq_step percent of the maximum allowed frequency (max policy * (freq_step / 100)). Example: with a max policy of 1600 MHz and a freq_step of 21%, the step is 1600 * 21% = 336 MHz. We have a 100 MHz grained frequency table, so 336 rounds up to 400. So if we are idling at 200 MHz and the up-scaling logic is triggered with the above settings, the next frequency will be 600 MHz. Note that freq_step and smooth_scaling do pretty much the same thing.
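These ondemand tunables live in sysfs, so you can experiment with them directly from a terminal as well as through apps. A minimal sketch, assuming the usual global path on this kernel generation (it may differ per ROM); the values are examples, not recommendations:
Code:
cd /sys/devices/system/cpu/cpufreq/ondemand
cat up_threshold sampling_rate powersave_bias
echo 85 > up_threshold          # scale up once load hits 85%
echo 50000 > sampling_rate      # re-evaluate load every 50 ms
echo 2 > sampling_down_factor   # linger at top speed when truly busy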
SMARTASSV2 GOVERNOR –
awake_ideal_freq - The frequency until which CPU is scaled up rapidly on screen-awake (from sleep). Thereafter, scaling up is less aggressive.
sleep_ideal_freq - The frequency until which CPU is scaled down rapidly when screen is turned off. Thereafter, scaling down is less aggressive.
up_rate_us - The minimum amount of time to spend at a frequency before we can ramp up. (Ignored below awake-ideal frequency since governor needs to rapidly scale up to awake_ideal_freq when below it)
down_rate_us - The minimum amount of time to spend at a frequency before we can ramp down. (Ignored above sleep-ideal frequency since governor needs to rapidly scale down to sleep_ideal_freq when above it)
max_cpu_load - Same as up_threshold in other governors.
min_cpu_load - Same as down_threshold in other governors.
ramp_down_step - Frequency delta when ramping down below the ideal frequency. Zero disables and will calculate ramp down according to load heuristic. When above the ideal frequency we always ramp down to the ideal freq.
ramp_up_step - Frequency delta when ramping up above the ideal frequency. Zero disables this and causes the governor to always jump straight to max frequency. When below the ideal frequency we always ramp up to the ideal freq.
sleep_wakeup_freq - The frequency to set when waking up from sleep. When sleep_ideal_freq=0 this will have no effect.
KTOONSERVATIVE & CONSERVATIVE GOVERNOR -
Boost_2nd_Core_On_Button -
This configuration option, when set, allows you to turn the second core ON with the Back+Home+Menu button combo.
Boost_CPU - @KToonsez hasn't documented much on this setting, but based on my experiments I believe it specifies the frequency to which the second core is set when it is turned on by the button combo above.
Boost_GPU - Similar to Boost_CPU, this will set the frequency of operation of the GPU when the second core is turned on.
Boost_Hold_Cycles -
This setting specifies how long the second core is kept on. A value of 22 translates to roughly 1 second.
Boost_Turn_on_2nd_Core -
When set, this setting makes the second core turn on immediately on touch.
CPU_Down_Block_Cycles -
This setting is used to counteract the effects of rapid hotplugging. It specifies how many sampling cycles must pass before the second core is hotplugged out, which keeps the core from being rapidly plugged in and out.
Disable_Hotplug_BT -
As the name suggests, when set, this setting stops the second core from being turned off while a Bluetooth connection is active.
Disable_Hotplugging -
When set, the entire process of hotplugging is turned off.
freq_step - Defines how much (as a percentage of the maximum CPU speed) the conservative governor will increase the CPU speed by each time the CPU load reaches the Up Threshold.
sampling_down_factor & sampling_rate - The sampling_down_factor value acts as a multiplier of sampling_rate, reducing how often the governor samples CPU utilization. For example, if you set sampling_rate to 10,000 and sampling_down_factor to 2, the governor samples CPU utilization every 20,000 microseconds.
freq_step - The freq_step parameter changes the size of the frequency step that the governor uses to change CPU frequency in either direction. By default this setting is 5, which means the governor will change the CPU frequency by five percent of the maximum or minimum frequency each time it changes frequencies. If you set this value to 100, the governor will behave exactly like the ondemand governor and immediately increase to the highest speed.
ignore_nice_load - When this option is set, processes that run with a positive nice value are not counted toward the overall CPU utilization, so they will not cause the CPU frequency to increase and may take longer to complete. When set to 0 (the default), all processes are counted toward the CPU utilization value; when set to 1, niced processes are ignored.
No_2nd_CPU_Screen_Off -
As the name says, setting this to 1 (the default) turns the second core off while the screen is off. For devices with more than 2 cores there are corresponding settings for the 3rd and 4th core.
Sampling_Down_Factor - This parameter controls the rate at which the kernel decides when to decrease the frequency while running at top speed. When set to 1, the CPU load is re-evaluated at the same interval regardless of the current clock speed. When set to greater than 1 (e.g. the default value of 2), it acts as a multiplier for the scheduling interval for re-evaluating load while the CPU is at its top speed due to high load.
This improves performance by reducing the overhead of load evaluation and helping the CPU stay at its top speed when truly busy, rather than shifting back and forth in speed. This tunable has no effect on behavior at lower speeds/lower CPU loads.
Sampling_Rate - Measured in µs (10^-6 seconds); this is how often the kernel polls CPU usage and decides what to do about the frequency. Its default value is 25000.
Sampling_Rate_Min - As the name states, this value provides a minimum limit on the Sampling_Rate. This is based on the Hardware Latency and Kernel variables. Default value is 10000.
Sampling_Rate_Screen_Off - As the name suggests, this is the value of Sampling_Rate when the screen is turned off. Default Value 40000.
Up_Threshold - This specifies what the average CPU usage between samplings of 'sampling_rate' needs to be for the kernel to decide whether to increase the frequency. For example, when set to 70, the CPU usage between checking intervals needs to average more than 70% for the kernel to decide that the frequency should be increased.
Up_Threshold_Hotplug - As the name suggests, this value determines when to bring the second core online: it is done when the CPU load reaches this percentage.
INTERACTIVE GOVERNOR -
hispeed_freq - An intermediate "hi speed" at which to initially ramp when CPU load hits the value specified in go_hispeed_load. If load stays high for the amount of time specified in above_hispeed_delay, then speed may be bumped higher. Default is maximum speed.
above_hispeed_delay - Once speed is set to hispeed_freq, wait this long before bumping the speed higher in response to continued high load. Default is 20000 µs.
go_hispeed_load - Go to hi speed when CPU load at or above this value. (Similar to Up-Threshold in other governors). The CPU load at which to ramp to the intermediate "hi speed". Default is 85%.
min_sample_time - The minimum amount of time to spend at the current frequency before ramping down. This is to ensure that the governor has seen enough historic cpu load data to determine the appropriate workload. Default is 80000 uS.
timer_rate - The sample rate of the timer used to increase frequency. It reevaluates cpu load when the system is not idle. Default is 20000 uS.
input_boost: If non-zero, boost speed of all CPUs to hispeed_freq on touchscreen activity. Default is 0.
boost: If non-zero, immediately boost speed of all CPUs to at least hispeed_freq until zero is written to this attribute. If zero, allow CPU speeds to drop below hispeed_freq according to load as usual.
boostpulse: Immediately boost speed of all CPUs to hispeed_freq for min_sample_time, after which speeds are allowed to drop below hispeed_freq according to load as usual.
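boost and boostpulse are plain sysfs files, which makes them easy to drive from scripts (for example, a touch-event hook). A hedged sketch, assuming the stock global path for the interactive governor's tunables:
Code:
# one-shot: ramp all CPUs to hispeed_freq for min_sample_time
echo 1 > /sys/devices/system/cpu/cpufreq/interactive/boostpulse
# sustained boost on, then off again
echo 1 > /sys/devices/system/cpu/cpufreq/interactive/boost
echo 0 > /sys/devices/system/cpu/cpufreq/interactive/boost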
WHEATLEY GOVERNOR -
target_residency - The minimum average residency in µs which is considered acceptable for a proper efficient usage of the C4 state. Default is 10000 = 10ms.
allowed_misses - The number of sampling intervals in a row during which the average residency is allowed to be lower than target_residency before the governor reduces the frequency. This ensures the governor is not too aggressive in scaling down and does not reduce the frequency just because some background process temporarily caused a larger number of wakeups. The default is 5.
VOLTAGES SECTION –
This section lists all the possible operating frequencies of the phone's processor. The millivolt (mV) values define the voltage the processor runs at for each frequency. Some basic facts: the frequencies determine how fast the processor operates, while the voltages determine how much juice is fed to it. The higher the voltage, the more heat the processor generates, so this section is important and critical for anyone who wishes to either overclock or undervolt. Overclocking means pushing the operating frequencies higher to obtain maximum performance and the fastest response time. As we saw in the General Section, the Enable Overclock checkbox allows you to push the boundaries of the operating frequencies; the processor will offer better performance, but will also generate more heat due to the higher operating voltages. In this section you can keep the processor cooler by applying a lower operating voltage at each step. This, however, should not be confused with undervolting.
Undervolting is used to get the most battery life. Since the operating voltage is the major consumer of the battery, lowering operating voltages in steps of 25 mV lets the processor run at that frequency with possible lag. Whether you notice the lag or not depends on how much the voltage is lowered, and also on the max and min operating frequencies you chose in the General Section.
Having said that, on this screen, you can press the menu button to get a new menu. This menu will allow you to modify voltages set at each frequency step.
The Load Stock Table option lets you reset to the default voltage values in case you wish to revert. The rest of the options, to add or subtract, let you change all steps in bulk; for example, the option to add 5 to all steps will add 5 mV to the current voltage at each step. The settings option does not seem to do anything.
NOTE – Even though the options are labeled in volts, they should read millivolts.
EXTRAS -
Even though this is called EXTRAS, it actually has quite a few important options. Chief among these is the ability to set different governors under certain circumstances, as well as a different upper limit on frequency.
SCREEN OFF PROFILE Mhz –
As the name says, this determines the upper limit on the operating frequency when the screen is off, overriding what you set on the General screen. This is good to have if you want the frequency throttled further when you are not using the phone, or simply want a different frequency cap for background apps.
NOTE – If you set the screen timeout too low and the screen turns off while you are reading something, you may see unexpected consequences, not to mention a battery or smoothness hit.
DISABLE SCREEN OFF Mhz CALL –
This is a further addition to the Screen Off Profile discussed above. As the name suggests, it controls whether the Screen Off Profile frequency is applied while you are on a call. Configure this carefully so that a low screen-off frequency does not leave your phone unresponsive or force-closing in the middle of a call.
SCREEN OFF PROFILE GOV. –
As the name says, you get to set a different governor for when the screen is off. This overrides your main governor choice. It's a pretty nifty arrangement: you can run a performance governor while the screen is on and a power-saving governor when it is off. Keep in mind that the screen may time out and turn off while you are reading without interacting.
SCREEN OFF PROFILE SCHED –
Similar to the governor option, this lets you choose a different scheduler for when the screen is off. It overrides (while the screen is off) what you set previously.
GPS PROFILE GOV –
Similar to the screen-off options, this sets a governor that comes into play when you are navigating or otherwise have GPS on. If you tend to keep GPS on permanently, keep in mind that your main governor choice will be permanently overridden.
GPS PROFILE SCHED –
Similar to the governor option, you get to choose which scheduler comes into play when you are navigating. Keep in mind that during navigation the phone keeps reading from the Google Maps cache, or from whichever navigation product you use, so choose the scheduler appropriately.
BLUETOOTH PROFILE MHZ –
This setting determines the MINIMUM operating frequency when the phone is paired over Bluetooth to another device. This is unlike the Screen off option where the Max frequency is determined.
FAST CHARGING –
One of the coolest features of the kernel. When set, the phone charges off PC USB ports as if it were connected to a wall outlet. This does turn off your access to the phone's internal memory and SD card; if you want to access the internal storage from the PC, you have to turn this off.
NOTE – Whether to turn this on or off has to be decided before connecting to the PC. Changing it after connecting has no effect.
VIBRATION STRENGTH –
This kernel parameter is actually a multiplier. It determines the intensity of vibration when the phone is in vibrate or ring-plus-vibrate mode, and it also governs the vibrations of the notifications you receive. (It possibly determines in-app or in-game vibrations too; I did not test.) It is a good thing to control, as some ROMs have very low vibration intensity out of the box. Do note that vibrations chew up your battery, so don't set it too high. Based on my experiments, the out-of-the-box setting of 120 seems good enough.
SWIPE 2 WAKE –
This is an interesting concept. If set, you get to short-circuit the process of waking the phone: slide across the capacitive buttons as if you were sliding the screen to unlock. This bypasses the step of pressing the power or home button to wake to the lock screen before actually unlocking the phone.
Given that my phone (SGH-T999) does not have all capacitive buttons, it does not seem to work for me. Besides, I have a secure PIN on my lock screen, so it won't unlock by this method either. (I don't find it that useful anyway.)
INTERNAL READ AHEAD, EXTERNAL READ AHEAD –
Both of these parameters control the size of the read-ahead buffer. Internal refers to your internal SD card and External refers to the MicroSD card. Do note the buffer resides in RAM, so if you set it too high you won't have free RAM to play with. This must also be used with a judicious choice of scheduler.
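Under the hood these map to the block layer's read_ahead_kb setting. A minimal sketch, assuming the usual mmcblk0 = internal / mmcblk1 = external mapping on this device (the numbering can differ between devices and ROMs):
Code:
echo 2048 > /sys/block/mmcblk0/queue/read_ahead_kb   # internal storage
echo 1024 > /sys/block/mmcblk1/queue/read_ahead_kb   # MicroSD card
cat /sys/block/mmcblk0/queue/read_ahead_kb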
GPU GOVERNOR –
This is probably of interest only to gamers or users of graphics-intensive apps. Similar to the previous governor choices for the CPU, this option allows you to tweak the governor choice for your GPU. It only affects the graphics displayed, so unless you have a graphics-intensive app running you won't notice the difference. By default it uses the governor set for the CPU.
TRINITY COLORS –
Think of this like the gamma control on your TV or monitor. It determines the basic color palette of your phone, as if you had applied a color filter to the screen. Whatever value you set here takes effect immediately.
CONGESTION CONTROL –
First of the TCP/IP network performance parameters. TCP congestion control determines which algorithm is applied for network congestion avoidance. You have two choices, Cubic and Reno: Cubic is less aggressive and Reno is more aggressive. Suffice it to say that results vary from network to network, so there is no single recommendation. For the more geeky minded, here's the Wikipedia link.
TIME WAIT RECYCLE –
Second of the TCP/IP network performance parameters. This parameter determines how long the system waits before it recycles a connection in the TIME_WAIT state. It benefits those on WiFi or high-speed data plans. Default is Enabled, so if you have perennially bad performance on a high-speed connection, you can turn it off.
TIME WAIT REUSE –
Third of the TCP/IP network performance parameters. Similar to Recycle above, this controls whether the system may reuse a connection still in the TIME_WAIT state. This too is set to Enabled by default; turn it off if you have connectivity issues.
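All three of these TCP options are standard /proc/sys entries, so you can inspect or toggle them from a root shell as well; KTweaker simply writes these values for you. A minimal sketch (1 = enabled, 0 = disabled):
Code:
cat /proc/sys/net/ipv4/tcp_available_congestion_control
echo cubic > /proc/sys/net/ipv4/tcp_congestion_control
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse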
SHOW TOAST MESSAGES –
This controls whether kernel response messages are displayed on screen, either for automated actions or in response to changes you made. Default is enabled.
ENABLE KTWIDGET TIMER –
To be honest, I have no idea what this does. If someone is willing to share, I will be more than happy to add.
BATTERY MHZ CONTROL –
This actually has its own submenu. Effectively it allows you to throttle down the CPU frequency when the battery is too low or too high, and you get to define what counts as the low and high levels. Lastly, you can turn it off while charging.
NOTE – This directly conflicts with the frequencies you set on the main screen, so use it judiciously.
KTHERMAL CONTROL –
This too has its own submenu with several options. Effectively it allows you to throttle down the CPU and GPU if the phone overheats. As the warning on the submenu correctly says, if you don't set it correctly you will damage the processor.
The options are pretty much self-explanatory. Just to keep noobs from killing their phones, I am not explaining each option; if there is high demand, I will add individual descriptions here.
GET LAST_KMSG, GENERATE A DMESG, GENERATE A LOGCAT –
All three of these are KTOONSEZ's way of grabbing error messages in case stuff happens. They are saved as text files on your internal SD card. You will only need these options if you are trying to identify a possible kernel issue.
SET OPTIONS ON BOOT -
This option allows you to choose when to apply the settings you have configured here: immediately after booting, or after a delay. If you are testing some exotic setting, choose to apply with a delay so that you have time to revert to stock.
BACKUP PREFS TO SDCARD –
Pretty much self-explanatory. It exports the settings to the internal SD card under the path you specify.
RESTORE PREFS FROM SDCARD –
Same as above, allows you to restore settings previously exported.
LOAD DEFAULTS –
Allows you to load the kernel's default values: the values KTOONSEZ has set for the kernel. A safe spot to run to if you manage to mess up the settings.
That pretty much concludes all the options on the KTweaker app.
RECOMMENDED SETTINGS -
This section provides basic stable profiles that have been tested repeatedly. These profiles help beginners get started in the direction they wish to go. Of course they may not be the best in their class, but no two phones are the same, so what works for one may not work for another.
Note-
For those who wish to further improve battery life, you may do well to visit this thread on eliminating the Google Services wakelock.
The general process for using these files is as follows.
1. Download the file(s) to your phone. In the case of .BIN files, optionally rename them to .TXT.
2. Copy the file(s) to the /SDCARD/KTweaker folder with the file manager of your choice.
3. Open the KTweaker app and tap Import Settings.
4. The file you just copied should be listed there. Choose the one you want to apply.
5. After applying, make sure the Set Options On Boot setting on the main menu of the KTweaker app shows a little green text below it confirming that the settings will be applied upon reboot.
6. Profit!
If you are hungry for more or wish to tinker further, head over to the Team Kernalizer threads. Links are given below (hidden in order to tidy up the thread).
Team Kernalizer Galaxy SIII threads by Carrier -
Team Kernalizer Thread for T Mobile Galaxy SIII / D2TMO - Thread Link
Team Kernalizer Thread for Sprint Galaxy SIII / D2SPR - Thread Link
Team Kernalizer Thread for AT&T Galaxy SIII / D2ATT - Thread Link
Team Kernalizer Thread for Verizon Galaxy SIII / D2VZW - Thread Link
Given below are some of the tried and tested profiles.
1. Conservative Battery Saver Profile -
Conservative Balanced Settings by @LuigiBull23
Settings File is for AOSP version of the Kernel. Attached to this post - ROW-Balanced_Bull_v2.txt
His battery life with these settings can be seen below.
I, on the other hand, had a little better luck with my light-to-moderate use. (Hidden to tidy up the thread.)
NOTE - I had accidentally connected the phone to my laptop for a minute or two when the battery was at 12% (Fast Charge was ON).
Please also note there is an additional experimental profile called Bless the Child V3 by @LuigiBull23 that I have attached below. Try it if you wish. I will post the results of my test after the next charge cycle.
LuigiBull23 said:
ROW Balanced Bull v3
***Reported to have resolved issues with battery drain, overheating, and random reboots!***
General
Locked Frequencies
CPU (MIN): 135Mhz
CPU (MAX): 1404Mhz
Scheduler: ROW
Scheduler Adjustments:
> hp_swrite_quantum = 3
> low_starv_limit = 8000
> rd_idle_data = 5
> rd_idle_data_freq = 15
> reg_starv_limit = 4000
> rp_swrite_quantum = 2
> rp_write_quantum = 2
Governor: Ktoonservative
Governor Adjustments:
> boost_cpu = 1026
> boost_hold_cycles = 18
> boost_turn_on_2nd_core = 0
> down_threshold = 58
> down_threshold_hotplug = 65
> freq_step = 2
> sampling_down_factor = 2
> sampling_rate = 25000
> sampling_rate_screen_off = 40000
> up_threshold = 70
> up_threshold_hotplug = 80
Voltages
CPU: -30mV across the board
GPU: -50mV across the board
Extras
Screen Off Profile Mhz: 378
Screen Off Profile Gov: Same as selected governor
Screen Off Profile Sched: Same as selected scheduler
Miscellaneous Section:
> Vibration Strength: 60
SD Card Section:
> Internal Read Ahead = 2048
> External Read Ahead = 2048
Battery Mhz Control:
> Battery Level Low: 20
> CPU Mhz for Low Level Battery: 1080Mhz
2. Extreme UNDERVOLTING PROFILE -
Extreme Undervolting without Lag by @iamikon
iamikon said:
Forget that try this! SIO-NoCleverName-AOSP
http://db.tt/3dcMQCz0
NOTE - You may need to up voltage by 50 mV if you continue to experience lag or Freeze.
Settings File for AOSP Version of the Kernel is attached - sionoclevernameaosp.txt
3. Gamer (Or Game intensive) PROFILE -
Thanks to @RErick, here's a good stable setup for those who wish to play graphics-intensive games (Shadowgun: DeadZone) on their phones.
Obviously, you won't be expecting outstanding battery life with intense gaming (can I get Prius gas mileage from a Corvette?). But if you do, @RErick has graced us with this profile. Do note this second profile may potentially lag under heavy graphics.
@Perseus71
http://lwn.net/Articles/509829/
Info on ROW scheduler
castle_bravo said:
@Perseus71
http://lwn.net/Articles/509829/
Info on ROW scheduler
Thanks Castle. Appreciate the link.
It's about damn time!! lol I've been waiting for a guide like this that works in correlation with the KTweaker app. It will definitely benefit newcomers as well as current users of this kernel.
Great guide! Thanks buddy :good:
Great guide, excellent work! Subscribed!
Great work, my friend. Will be adding this to all the TK threads. If you ever need anything just give a buzz, we would be glad to help any way possible. Again, great write-up and guide, very informative.
Looks good, friend. Keep up the good work. I will link it in my threads too.
Hi nice work there, but is there any latest settings for tw version ? Or have i missed it ?
Rayfucious said:
Hi nice work there, but is there any latest settings for tw version ? Or have i missed it ?
I personally use AOSP Roms. So I can't translate them to Touchwiz. However, I have given the individual setting Values for the Balanced Profile. You can enter these values into KTweaker to get the file.
Perseus71 said:
I personally use AOSP Roms. So I can't translate them to Touchwiz. However, I have given the individual setting Values for the Balanced Profile. You can enter these values into KTweaker to get the file.
So i can enter the above values into TW version and compatible ? cause i thought aosp kernel settings and tw would be different.
Can't find this 2 settings in ktweaker for tw version though.
up_threshold_hotplug = 80
down_threshold_hotplug = 62
The rest are fine though. Will be trying out and see how it goes.
This is the new refined home for DarkRoom Development. If you submit bug reports without a log, you may be prosecuted...or executed.
Disclaimer:
If your device fails to comply with your standards of what you consider functioning, I am not liable. This is provided free of charge and does not come with a warranty. Although, if you provide a log, I can provide some sort of assurance that I will look into your issue.
Links:
Social:
Twitter - http://twitter.com/DespairDev
G+ Community - https://plus.google.com/u/0/communities/117685307734094084120
Telegram - https://t.me/darkroomdev
Discord - https://discord.gg/BGTFutW
Downloads:
https://go.hunternott.com/darkroom
Source:
Github – https://github.com/matthewdalex/
Github – https://github.com/UBERROMS/
Credits:
faux123
franco
Google
flar2
imoseyon
Cl3Kener
neobuddy89
Star Wars
XDA:DevDB Information
[KERNEL(Nougat)][ROM]Kylo Kernel/UBERSTOCK, ROM for the Huawei Nexus 6P
Contributors
DespairFactor, Cl3Kener
Source Code: https://github.com/UBERROMS
ROM OS Version: 6.0.x Marshmallow
ROM Kernel: Linux 3.10.x
Based On: AOSP
Version Information
Status: Testing
Created 2015-11-18
Last Updated 2017-12-28
Packet Schedulers/Congestion Avoidance Algorithms:
CDG vs. Cubic vs. Westwood:
CDG
CAIA-Delay Gradient (CDG) is a hybrid congestion control algorithm which reacts to both packet loss and inferred queuing delay. It attempts to operate as a delay-based algorithm where possible, but utilises heuristics to detect loss-based TCP cross traffic and will compete effectively as required. CDG is therefore incrementally deployable and suitable for use on shared networks. During delay-based operation, CDG uses a delay-gradient based probabilistic backoff mechanism, and will also try to infer non congestion related packet losses and avoid backing off when they occur. During loss-based operation, CDG essentially reverts to reno-like behaviour. CDG switches to loss-based operation when it detects that a configurable number of consecutive delay-based backoffs have had no measurable effect. It periodically attempts to return to delay-based operation, but will keep switching back to loss-based operation as required.
Cubic
CUBIC is an enhanced version of BIC: it simplifies the BIC window control and improves its TCP-friendliness and RTT-fairness. The window growth function of CUBIC is governed by a cubic function in terms of the elapsed time since the last loss event. Our experience indicates that the cubic function provides a good stability and scalability. Furthermore, the real-time nature of the protocol keeps the window growth rate independent of RTT, which keeps the protocol TCP friendly under both short and long RTT paths.
Westwood
TCP Westwood estimates the available bandwidth by counting and filtering the flow of returning ACKs and adaptively sets the cwnd and ssthresh after congestion by taking into account the estimated bandwidth. TCP Westwood is a sender-side-only modification to TCP New Reno that is intended to better handle large bandwidth-delay product paths (large pipes), with potential packet loss due to transmission or other errors (leaky pipes), and with dynamic load (dynamic pipes). TCP Westwood+ is an evolution of TCP Westwood; it was soon discovered that the Westwood bandwidth estimation algorithm did not work well in the presence of reverse traffic due to ACK compression. Westwood+ is friendly towards TCP Reno and fairer than Reno in bandwidth allocation.
Packet Schedulers:
Why use a non default packet scheduler?
Packet schedulers are the portion of the kernel that queues network data on a specific interface and governs how packets are transmitted and received, including buffering. Below I break down a couple of the packet schedulers included in this kernel.
fq_codel
FQ_CoDel (Fair Queuing Controlled Delay) is a queuing discipline that combines fair queuing with the CoDel AQM scheme. FQ_CoDel uses a stochastic model to classify incoming packets into different flows and provides a fair share of the bandwidth to all flows using the queue. Each flow is managed by the CoDel queuing discipline. Reordering within a flow is avoided since CoDel internally uses a FIFO queue.
pfifo_fast
The FIFO algorithm forms the basis for the default qdisc on all Linux network interfaces (pfifo_fast). It performs no shaping or rearranging of packets. It simply transmits packets as soon as it can after receiving and queuing them. This is also the qdisc used inside all newly created classes until another qdisc or a class replaces the FIFO.
A real FIFO qdisc must, however, have a size limit (a buffer size) to prevent it from overflowing in case it is unable to dequeue packets as quickly as it receives them. Linux implements two basic FIFO qdiscs, one based on bytes, and one on packets. Regardless of the type of FIFO used, the size of the queue is defined by the parameter limit. For a pfifo the unit is understood to be packets and for a bfifo the unit is understood to be bytes.
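To make the limit parameter concrete, here is a hedged example using the same interface names as the how-to further down in this post (wlan0 for WiFi, rmnet_data0 for mobile data):
Code:
tc qdisc add dev wlan0 root pfifo limit 100         # packet-based FIFO, 100 packets
tc qdisc add dev rmnet_data0 root bfifo limit 64kb  # byte-based FIFO, 64 KB
tc -s qdisc show dev wlan0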
pie
PIE is designed to control delay effectively. First, an average dequeue rate is estimated based on the standing queue. That rate is used to calculate the current delay. Then, on a periodic basis, the delay is used to calculate the dropping probability. Finally, on arrival, a packet is dropped (or marked) based on this probability. PIE adjusts the probability based on the trend of the delay, i.e. whether it is going up or down, and the delay converges quickly to the specified target value. alpha and beta are statically chosen parameters determined through control-theoretic approaches: alpha determines how the deviation between the current and target latency changes the probability, while beta exerts additional adjustment depending on the latency trend. The drop probability is used to mark packets in ECN mode; however, as in RED, beyond 10% packets are dropped based on this probability. The byte mode is used to drop packets proportionally to packet size.
fq
A packet scheduler is charged with organizing the flow of packets through the network stack to meet a set of policy objectives. The kernel has quite a few of them, including CBQ for fancy class-based routing, CHOKe for routers, and a couple of variants on the CoDel queue management algorithm. FQ joins this list as a relatively simple scheduler designed to implement fair access across large numbers of flows with local endpoints while keeping buffer sizes down; it also happens to implement TCP pacing.
FQ keeps track of every flow it sees passing through the system. To do so, it calculates an eight-bit hash based on the socket associated with the flow, then uses the result as an index into an array of red-black trees. The data structure is designed, according to Eric, to scale well up to millions of concurrent flows. A number of parameters are associated with each flow, including its current transmission quota and, optionally, the time at which the next packet can be transmitted.
That transmission time is used to implement the TCP pacing support. If a given socket has a pace specified for it, FQ will calculate how far the packets should be spaced in time to conform to that pace. If a flow's next transmission time is in the future, that flow is added to another red-black tree with the transmission time used as the key; that tree, thus, allows the kernel to track delayed flows and quickly find the one whose next packet is due to go out the soonest. A single timer is then used, if needed, to ensure that said packet is transmitted at the right time.
The scheduler maintains two linked lists of active flows, the "new" and "old" lists. When a flow is first encountered, it is placed on the new list. The packet dispatcher services flows on the new list first; once a flow uses up its quota, that flow is moved to the old list. The idea here appears to be to give preferential treatment to new, short-lived connections — a DNS lookup or HTTP "GET" command, for example — and not let those connections be buried underneath larger, longer-lasting flows. Eventually the scheduler works its way through all active flows, sending a quota of data from each; then the process starts over.
There are a number of additional details, of course. There are limits on the amount of data queued for each flow, as well as a limit on the amount of data buffered within the scheduler as a whole; any packet that would exceed one of those limits is dropped. A special "internal" queue exists for high-priority traffic, allowing it to reach the wire more quickly. And so on.
One other detail is garbage collection. One problem with this kind of flow tracking is that nothing tells the scheduler when a particular flow is shut down; indeed, nothing can tell the scheduler for flows without local endpoints or for non-connection-oriented protocols. So the scheduler must figure out on its own when it can stop tracking any given flow. One way to do that would be to drop the flow as soon as there are no packets associated with it, but that would cause some thrashing as the queues empty and refill; it is better to keep flow data around for a little while in anticipation of more traffic. FQ handles this by putting idle flows into a special "detached" state, off the lists of active flows. Whenever a new flow is added, a pass is made over the associated red-black tree to clean out flows that have been detached for a sufficiently long time — three seconds in the current patch.
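If you want to try fq itself, it takes the same tc commands shown in the how-to below. A hedged sketch; maxrate is optional and simply caps the per-flow pacing rate, so omit it if your tc build does not recognize the option:
Code:
tc qdisc replace dev wlan0 root fq
tc qdisc replace dev rmnet_data0 root fq maxrate 20mbit
tc -s qdisc show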
cake
The CAKE Principle (or, how to have your cake and eat it too): this is a combination of several shaping, AQM and FQ techniques in one easy-to-use package:
- An overall bandwidth shaper, to move the bottleneck away from dumb CPE equipment and bloated MACs. This operates in deficit mode (as in sch_fq), eliminating the need for any sort of burst parameter (e.g. token bucket depth). Burst support is limited to that necessary to overcome scheduling latency.
- A Diffserv-aware priority queue, giving more priority to certain classes, up to a specified fraction of bandwidth. Above that bandwidth threshold, the priority is reduced to avoid starving other classes.
- Each priority class has a separate flow queue system, to isolate traffic flows from each other. This prevents a burst on one flow from increasing the delay to another. Flows are distributed to queues using a set-associative hash function.
- Each queue is actively managed by CoDel. This serves flows fairly, and signals congestion early via ECN (if available) and/or packet drops, to keep latency low. The CoDel parameters are auto-tuned based on the bandwidth setting, as is necessary at low bandwidths.
The configuration parameters are kept deliberately simple for ease of use; everything has sane defaults, and complete generality of configuration is not a goal. The priority queue operates according to a weighted DRR scheme, combined with a bandwidth tracker that reuses the shaper logic to detect which side of the bandwidth-sharing threshold the class is operating on. This determines whether a priority-based weight (high) or a bandwidth-based weight (low) is used for that class in the current pass.
This qdisc incorporates much of Eric Dumazet's fq_codel code, customised for use as an integrated subordinate.
How to apply a packet scheduler:
1. Open terminal on your device
2. Use the "su" command to become root
3. Use tc to change the packet scheduler (qdisc) on your device. I have included an example below; the first line is for WiFi and the second for mobile data. In the example we set the qdisc to fq_pie, which is a mix of PIE with per-flow rate shaping from fq.
Code:
tc qdisc add dev wlan0 root fq_pie
tc qdisc add dev rmnet_data0 root fq_pie
4. Confirm your packet scheduler has been applied by using the tc tool again. I have included an example below.
Code:
tc qdisc
To use another packet scheduler after applying a previous one, you will need to either reboot or remove the added qdisc from each interface using the command I have included below.
Code:
tc qdisc del root dev wlan0
tc qdisc del root dev rmnet_data0
Kylo is the bees knees!
Kylo has arrived! Cool
galaxys said:
Kylo has arrived! Cool
Just an early build with 260+ commits
Sent from my Nexus 6P using Tapatalk
Systemless root is also working by following the instructions! I flashed the modified boot.img, followed by Kylo, then SuperSU, and all is working after reboot.
Just to mention, with systemless root some file managers don't allow the system partition to be set to RW.
Oh yeah, love seeing development so early already. Thank you.
Waiting on Chainfire to update his root kernel for MDB08M (angler)
Must say Matt this is smooth
chevycam94 said:
Waiting on Chainfire to update his root kernel for MDB08M (angler)
You could flash an older factory image
Aridon said:
Oh yeah, love seeing development so early already. Thank you.
I try to be fast
dabug123 said:
Must say Matt this is smooth
We will see how long it takes to get up to where I want it
On that note, R2 is up.
Also, here is a script to enable double tap to wake.
Code:
#!/system/bin/sh
# Wait for boot to settle before touching sysfs
sleep 30
# Enable the touchscreen's double-tap-to-wake gesture
echo 1 > /sys/devices/soc.0/f9924000.i2c/i2c-2/2-0070/input/input0/wake_gesture
What if I'm already rooted with chainfires img file running a custom ROM? I also have the new twrp 2.7.0.1 by the way.
stevew84 said:
What if I'm already rooted with chainfires img file running a custom ROM? I also have the new twrp 2.7.0.1 by the way.
You are fine to flash it over
Sent from my Nexus 6P using Tapatalk
DespairFactor said:
You are fine to flash it over
Sent from my Nexus 6P using Tapatalk
So I should flash the ROM again then your kernel and SU. I've heard you shouldn't flash custom kernels over each other.
stevew84 said:
So I should flash the ROM again then your kernel and SU. I've heard you shouldn't flash custom kernels over each other.
No, just flash it over, you should be fine
Sent from my Nexus 6P using Tapatalk
I'll give her a shot, Matt
Sent from my Nexus 6P using Tapatalk
DespairFactor said:
No, just flash it over, you should be fine
Sent from my Nexus 6P using Tapatalk
Message me on hangouts going to do some walls for kernel
I am so sorry guys, I just had to bump to R2.1 for a couple scheduler patches.
dabug123 said:
Message me on hangouts going to do some walls for kernel
What is your hangouts ID?
DespairFactor said:
What is your hangouts ID?
Paul clark