Discussion on GCC Optimizations - Samsung Galaxy Nexus

As pretty much everyone here is aware, there seems to be an obsession with using -O3 for compiling binaries for this device, probably because -O3 is the "most optimized" flag in GCC. The issue is that all of these optimizations do not come without drawbacks.
Technically, given the nature of the Galaxy Nexus as a mid-spec ARM-based device, we should be using -Os to reduce the size of the code that needs to be run.
There are also other drawbacks to -O3, such as significantly larger binaries and possible instability, which is why it is not the default in the Linux kernel. Binary size does not only affect size on disk; it also affects how much of the code fits in the CPU cache and RAM, and therefore processing time.
If somebody could show actual benchmark data demonstrating that -O3 actually is an improvement over -O2 and -Os on the Galaxy Nexus, I would really appreciate it, because I would like to see why it is used on nearly every ROM and kernel.
Edit: I also just read up on -Ofast, which disables some standards compliance by simplifying math. I wonder if this would cause any stability issues on the Galaxy Nexus. I'd really like to try -Os -ffast-math when I have time.
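For anyone who wants to test this properly, here is a minimal sketch of the comparison. "bench.c" is just a placeholder for whatever workload you care about, and the arm-linux-gnueabi prefix is only one common cross-toolchain name; substitute whatever your build environment uses:
# Build the same code at each optimisation level and compare code size.
for opt in O2 O3 Os; do
  arm-linux-gnueabi-gcc -$opt -o bench-$opt bench.c
  arm-linux-gnueabi-size bench-$opt    # text/data/bss breakdown per level
done
# Then push the binaries to the device and time them, e.g.:
#   time ./bench-O2; time ./bench-O3; time ./bench-Os
-Os will almost always win on size; whether -O3 ever wins on time is exactly the thing that needs measuring.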

I'm sorry, I don't have the time to do that, but I can say this. In all the time I've spent tinkering with compiler optimisations, -O3 has rarely been worth it, especially on a system like the OMAP 4460, which is bottlenecked more by IO than by MIPS or FLOPS. It seems Google's default is -O2, and they have guys who know things about compilers. I would be very curious about -Os though, since that's basically -O2 with the code-bloating features turned off. But I suspect there won't be a perceptible difference.
My guess (again, don't have time to test this) is that since 3 is a bigger number than 2, it's used by people who don't know precisely what it does, which seems to be the MO for a lot of people who create ROMs.

borizz27 said:
My guess (again, don't have time to test this) is that since 3 is a bigger number than 2, it's used by people who don't know precisely what it does, which seems to be the MO for a lot of people who create ROMs.
Click to expand...
Click to collapse
I also believe that this is the case... I've even seen a ROM on another phone "compiled with O4", which just uses -O3 (GCC treats anything above 3 as 3)...

During my brief stint writing patches for Gentoo Linux, back when my desktop computer was slower than my phone is now, I read all kinds of weird stuff. People with -ffast-math on complaining that math was wrong, for example, or people on tiny systems calling for complete loop unrolling.
The GCC website is quite clear about what the different -O levels do: http://gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Optimize-Options.html#Optimize-Options
I would find it very odd if someone at Google hadn't had the same idea and actually tested the different -O levels. I'm guessing -O2 is where it's at.

borizz27 said:
During my brief stint writing patches for Gentoo Linux, back when my desktop computer was slower than my phone is now, I read all kinds of weird stuff. People with -ffast-math on complaining that math was wrong, for example, or people on tiny systems calling for complete loop unrolling.
The GCC website is quite clear about what the different -O levels do: http://gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Optimize-Options.html#Optimize-Options
I would find it very odd if someone at Google hadn't had the same idea and actually tested the different -O levels. I'm guessing -O2 is where it's at.
Click to expand...
Click to collapse
Sure, and somebody who worked on Debian, Red Hat, etc. also decided on -O2, and it has just become the standard for stable, production-ready C builds.
Theoretically, due to the lack of sufficient cache on tiny ARM chips like the OMAP4, we should try to keep code size minimal through something like -Os.
Also, I am assuming that -ffast-math has improved because of the inclusion of -Ofast in GCC 4.6.

I hope to do some testing after I sync AOSP and fix errors with GCC 4.8

MДЯCЦSДИT said:
Sure, and somebody who worked on Debian, Red Hat, etc. also decided on -O2, and it has just become the standard for stable, production-ready C builds.
Theoretically, due to the lack of sufficient cache on tiny ARM chips like the OMAP4, we should try to keep code size minimal through something like -Os.
Also, I am assuming that -ffast-math has improved because of the inclusion of -Ofast in GCC 4.6.
Click to expand...
Click to collapse
I don't know. The 4460's cache is 1MB. While not huge, it's not tiny by any stretch of the imagination. However, I'm looking forward to your results. We can all keep guessing about what's best, but hard data will be better.
As far as I know, -ffast-math wasn't improved -- it still cuts the same corners and breaks the standard in the same places. -Ofast just combines -O3 with all the standards-breaking options like -ffast-math.
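You can check that yourself with GCC's option dump, which lists exactly which optimization flags each level turns on (assuming a GCC new enough to support both -Ofast and -Q --help=optimizers):
gcc -O3 -Q --help=optimizers > o3.txt
gcc -Ofast -Q --help=optimizers > ofast.txt
diff o3.txt ofast.txt   # the math-related flags flip from [disabled] to [enabled]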

Related

[Q] Dual Core V. Single Core?

So with the new Dual Core phones coming out I'm wondering... What's all the hullabaloo?
I just finished reading the Moto Atrix review from Engadget and it sounds like crap. They said docking to the ridiculously priced webtop accessory was slow as shiz.
Anyone who knows better, please educate me. I'd like to know what is or will be offered that Dual Core will be capable of that our current gen phones will NOT be capable of.
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life. If we end up having more data-intensive apps and Android becomes more powerful, multi-core CPUs will help a lot too. Naturally, Android will need to be broken down and revamped to utilize multiple cores to their full potential, though. At some point I can see Google merging in a large part of the desktop Linux kernel to help with that process.
At the rate Android (and smartphones in general) is progressing, someday we may see a 64-bit OS on a phone, and we will definitely need multi-core CPUs then. I know, it's a bit of a dream, but it's probably not too far-fetched.
KCRic said:
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life.
Click to expand...
Click to collapse
I'd really, REALLY like to know how you came to that particular conclusion. While a dual core might not eat through quite as much wattage as two single cores, one that takes less is pure snake oil IMO. I have yet to see a dual-core CPU that is rated lower than a comparable single core on the desktop. Why would this be different for phones?
Software and OSes that can handle a dual-core CPU need additional CPU cycles to manage the threading this results in, so if anything, dual-core CPUs will greatly, GREATLY diminish battery life.
The original poster's question is valid. What the heck would one need dual-core CPUs in phones for? Personally, I can't think of anything. Running several apps in parallel was a piece of cake way before dual CPUs, and more power can easily be obtained by increasing the clock speed.
I'm not saying my parent poster is wrong, but I sure as heck can't imagine the physics behind his statement. So if I'm wrong, someone please enlighten me.
I can see dual cores offering a smoother user experience -- one core could be handling an audio stream while the other is doing phone crap. I don't see how it could improve battery life though....
The theory is that two cores can accomplish the same thing as a single core while only working half as hard; I've seen several articles stating that dual cores will help battery life. Whether that is true, I don't know.
Sent from my T-Mobile G2 using XDA App
Kokuyo, while you do have a point about dual cores being overkill in a phone, I remember long ago people saying "why would you ever need 2GB of RAM in a PC" or "who could ever fill up a 1TB hard drive."
Thing is, wouldn't the apps themselves have to be made to take advantage of dual cores as well?
JBunch1228: The short-term answer is nothing. Same answer as the average joe asking what he needs a quad-core in his desktop for. Right now it seems as much a sales gimmick as anything else, since the only Android version that can actually make use of it is Honeycomb. Kinda like the 4G bandwagon everyone jumped on, all marketing right now.
Personally, I'd like to see what happens with the paradigm the Atrix is bringing out in a year or so. Put Linux on a decent-sized SSD for the laptop component, and use the handset for processing and communications exclusively, rather than treat the 'laptop dock' as nothing more than an external keyboard.
As far as battery life, I can see how dual-cores could affect it positively: a dual core doesn't pull as much power as two individual cores, and if the chip is running for half as long as a single core would for the same operation, that would give you better battery life. Everyone keep in mind I said *if*. I don't see that happening before Q4, since the OS and apps need to be optimized for it.
My $.02 before depreciation.
Then there are the rumors of mobile quad-cores from Nvidia by Q4 as well. I'll keep my single-core Vision and see what's out there when my contract ends. We may have a whole new world.
KCRic said:
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life. If we end up having more data-intensive apps and Android becomes more powerful, multi-core CPUs will help a lot too. Naturally, Android will need to be broken down and revamped to utilize multiple cores to their full potential, though. At some point I can see Google merging in a large part of the desktop Linux kernel to help with that process.
Click to expand...
Click to collapse
Wow, that's complete nonsense.
You can't add parts and end up using less power.
Also, Android needs no additional work to support multiple cores. Android runs on the LINUX KERNEL, which is ***THE*** choice for multi-core/multi-processor supercomputers. Android applications each run in their own process, and the Linux kernel then takes over process scheduling. Android applications also are *already* multi-threaded (unless the specific application developer was a total newb).
At the rate Android (and smartphones in general) is progressing, someday we may see a 64-bit OS on a phone, and we will definitely need multi-core CPUs then. I know, it's a bit of a dream, but it's probably not too far-fetched.
Click to expand...
Click to collapse
What's the connection? Just because the desktop processor manufacturers went multi-core and 64-bit at roughly the same time doesn't mean that the two are even *slightly* related. Use of a 64-bit OS on a phone certainly does ***NOT*** somehow require that the processor be multi-core.
dhkr234 said:
Wow, that's complete nonsense.
You can't add parts and end up using less power.
Also, Android needs no additional work to support multiple cores. Android runs on the LINUX KERNEL, which is ***THE*** choice for multi-core/multi-processor supercomputers. Android applications each run in their own process, and the Linux kernel then takes over process scheduling. Android applications also are *already* multi-threaded (unless the specific application developer was a total newb).
What's the connection? Just because the desktop processor manufacturers went multi-core and 64-bit at roughly the same time doesn't mean that the two are even *slightly* related. Use of a 64-bit OS on a phone certainly does ***NOT*** somehow require that the processor be multi-core.
Click to expand...
Click to collapse
The connection lies in the fact that this is technology we're talking about. It continually advances and does so at a rapid rate. Nowhere did I say we'll make that jump 'at the same time'. Linux is not ***THE*** choice for multi-core computers; I use Sabayon, but Win7 also seems to do just fine with multiple cores. Android doesn't utilize multi-core processors to their full potential and also uses a modified version of the Linux kernel (which does fully support multi-core systems); that's why I made the statement about merging. Being Linux and being based on Linux are not the same thing. Think of iOS or OS X - based on Linux, but tell me, how often do Linux instructions work for a Mac?
"You can't add parts and use less power" - the car industry would like you to clarify that, along with the computer industry. 10 years ago, how much energy did electronics use? Was the speed and power vs. power consumption ratio better than it is today? No? I'll try to give an example that hopefully explains why it consumes less power.
Pizza=data
People=processors
Time=heat and power consumption
1 person takes 20 minutes to eat 1 whole pizza while 4 people take only 5 minutes. That one person is going to have to work harder and longer in order to complete the same task as the 4 people. That will use more energy and generate much more heat. Heat, as we know, causes processors to become less efficient which means more energy is wasted at the higher clock cycles and less information processed per cycle.
It's not a very technical explanation of why a true multi-core system uses less power but it will have to do. Maybe ask NVidia too since they stated the Tegra processors are more power efficient.
KCRic said:
The connection lies in the fact that this is technology we're talking about. It continually advances and does so at a rapid rate. Nowhere did I say we'll make that jump 'at the same time'. Linux is not ***THE*** choice for multi-core computers; I use Sabayon, but Win7 also seems to do just fine with multiple cores.
Click to expand...
Click to collapse
Show me ***ONE*** supercomputer that runs wondoze. I DARE YOU! They don't exist!
Android doesn't utilize multi-core processors to their full potential and also uses a modified version of the Linux kernel (which does fully support multi-core systems); that's why I made the statement about merging. Being Linux and being based on Linux are not the same thing.
Click to expand...
Click to collapse
??? No, being LINUX and GNU/LINUX are not the same. ANDROID ***IS*** LINUX, but not GNU/LINUX. The kernel is the kernel. The modifications? Have nothing to do with ANYTHING this thread touches on. The kernel is FAR too complex for Android to have caused any drastic changes.
Think of iOS or OS X - based on Linux, but tell me, how often do Linux instructions work for a Mac?
Click to expand...
Click to collapse
No. Fruitcakes does NOT use LINUX ***AT ALL***. They use MACH. A *TOTALLY DIFFERENT* kernel.
"you can't add parts and use less power", the car industry would like you clarify that, along with the computer industry. 10 years ago how much energy did electronics use? Was the speed and power vs. power consumption ratio better than it is today? No? I'll try to give an example that hopefully explains why consumes less power.
Click to expand...
Click to collapse
Those changes are NOT RELATED to adding cores, but making transistors SMALLER.
Pizza=data
People=processors
Time=heat and power consumption
1 person takes 20 minutes to eat 1 whole pizza while 4 people take only 5 minutes. That one person is going to have to work harder and longer in order to complete the same task as the 4 people. That will use more energy and generate much more heat. Heat, as we know, causes processors to become less efficient which means more energy is wasted at the higher clock cycles and less information processed per cycle.
It's not a very technical explanation of why a true multi-core system uses less power but it will have to do. Maybe ask NVidia too since they stated the Tegra processors are more power efficient.
Click to expand...
Click to collapse
You have come up with a whole lot of nonsense that has ABSOLUTELY NO relation to multiple cores.
Energy consumption is related to CPU TIME.
You take a program that takes 10 minutes of CPU time to execute on a single-core 3GHz processor, split it between TWO otherwise identical cores operating at the SAME FREQUENCY, add in some overhead to split it between two cores, and you have 6 minutes of CPU time on TWO cores, which is 20% *MORE* energy consumed on a dual-core processor.
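To put rough numbers on that, here is a quick sanity check of the arithmetic, assuming (as the argument above does) that energy simply scales with active core-minutes at a fixed voltage and frequency:
awk 'BEGIN {
  single = 10 * 1;   # 10 minutes of CPU time on 1 core
  dual   = 6 * 2;    # 6 minutes each on 2 cores, incl. threading overhead
  printf "dual-core uses %.0f%% of the single-core energy\n", 100 * dual / single
}'
# prints: dual-core uses 120% of the single-core energy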
And you want to know what NVIDIA will say about their bloatchips? It uses less power than *THEIR* older hardware because it has **SMALLER TRANSISTORS** that require less energy.
Don't quit your day job; computer engineering is NOT YOUR FORTE.
dhkr234 said:
Show me ***ONE*** supercomputer that runs wondoze. I DARE YOU! They don't exist!
??? No, being LINUX and GNU/LINUX are not the same. ANDROID ***IS*** LINUX, but not GNU/LINUX. The kernel is the kernel. The modifications? Have nothing to do with ANYTHING this thread touches on. The kernel is FAR too complex for Android to have caused any drastic changes.
No. Fruitcakes does NOT use LINUX ***AT ALL***. They use MACH. A *TOTALLY DIFFERENT* kernel.
Those changes are NOT RELATED to adding cores, but making transistors SMALLER.
You have come up with a whole lot of nonsense that has ABSOLUTELY NO relation to multiple cores.
Energy consumption is related to CPU TIME.
You take a program that takes 10 minutes of CPU time to execute on a single-core 3GHz processor, split it between TWO otherwise identical cores operating at the SAME FREQUENCY, add in some overhead to split it between two cores, and you have 6 minutes of CPU time on TWO cores, which is 20% *MORE* energy consumed on a dual-core processor.
And you want to know what NVIDIA will say about their bloatchips? It uses less power than *THEIR* older hardware because it has **SMALLER TRANSISTORS** that require less energy.
Don't quit your day job; computer engineering is NOT YOUR FORTE.
Click to expand...
Click to collapse
If you think that it's just a gimmick or trend, then why does every laptop manufacturer use dual core or more and have better battery life than the old single cores? Sometimes trends do have more use than aesthetic appeal. Your know-it-all approach is nothing new around here, and you're not the only person who works in IT around here. Theories are one thing, but without any proof when ALL current tech says otherwise... it makes you sound like an idiot. Sorry...
I bet I can pee further
Sent from my HTC Vision using XDA App
zaelia said:
I bet I can pee further
Sent from my HTC Vision using XDA App
Click to expand...
Click to collapse
The smaller ones usually can, I think it has to do with the urethra being more narrow as to allow a tighter, further shooting stream.
Sent from my HTC Glacier using XDA App
TJBunch1228 said:
The smaller ones usually can, I think it has to do with the urethra being more narrow as to allow a tighter, further shooting stream.
Sent from my HTC Glacier using XDA App
Click to expand...
Click to collapse
Well, you would know
sino8r said:
Well, you would know
Click to expand...
Click to collapse
It might be short but it sure is skinny.
Sent from my HTC Glacier using XDA App
sino8r said:
If you think that it's just a gimmick or trend, then why does every laptop manufacturer use dual core or more and have better battery life than the old single cores? Sometimes trends do have more use than aesthetic appeal. Your know-it-all approach is nothing new around here, and you're not the only person who works in IT around here. Theories are one thing, but without any proof when ALL current tech says otherwise... it makes you sound like an idiot. Sorry...
Click to expand...
Click to collapse
+1
I was comparing speeds on the Atrix to the [email protected] and they matched. The Atrix was much more efficient on heat and probably with battery. The dual cores will use less power because the two cores will be better optimized for splitting the tasks, and will use half the power running the same process as the single core, since a single core runs at the same voltage doing all the work that a dual core splits between two. Let's not start a flame war and make personal attacks on people
Sent from my HTC Vision with Habanero FAST 1.1.0
It is disturbing that there are people out there who can't understand this VERY BASIC engineering.
Voltage, by itself, has NO MEANING. You are forgetting about CURRENT. POWER = CURRENT x VOLTAGE.
Battery drain is DIRECTLY PROPORTIONAL to POWER. Not voltage. Double the voltage and half the current, power remains the same.
Dual core does NOT increase battery life. It increases PERFORMANCE by ***DOUBLING*** the physical processing units.
Battery life is increased through MINIATURIZATION and SIMPLIFICATION, which becomes *EXTREMELY* important as you increase the number of physical processing units.
It is the epitome of IGNORANCE to assume that there is some relation when there is not. The use of multiple cores relates to hard physical limitations of the silicon. You can't run the silicon at 18 GHz! Instead of racing for higher frequencies, the new competition is about how much work you can do with the SAME frequency, and the ***EASIEST*** way to do this is to bolt on more cores!
For argument's sake, take a look at a couple of processors:
Athlon II X2 240e / C3.... 45 watt TDP, 45 nm
Athlon II X4 630 / C3.... 95 watt TDP, 45 nm
Same stepping, same frequency (2.8 GHz), same voltage, same size, and the one with twice the cores eats more than twice the power. Wow, imagine that!
The X4 is, of course, FASTER, but not by double.
Now let's look at another pair of processors:
Athlon 64 X2 3800+ / E6.... 89 watt TDP, 90 nm
Athlon II X2 270u / C3.... 25 watt TDP, 45 nm
Different stepping, SAME frequency (2.0 GHz), same number of cores, different voltage, different SIZE, WAY different power consumption. JUST LOOK how much more power the older chip eats!!! 3.56 times as much. Also note that other power management features exist on the C3 that didn't exist on the E6, so the difference in MINIMUM power consumption is much greater.
Conclusion: There is no correlation between a reduction in power consumption and an increase in the number of PPUs. More PPUs = more performance. Reduction in power consumption is related to size, voltage, and other characteristics.
dhkr234 said:
Don't quit your day job; computer engineering is NOT YOUR FORTE.
Click to expand...
Click to collapse
Good job on being a douche. I didn't insult you in anything I said, and if you disagree with my perspective then state it; otherwise shut up. I didn't tell you English grammar isn't your forte, so maybe you should keep your senile remarks to yourself.
You seem to want to argue over a few technicalities, and I'll admit I don't have a PhD in computer engineering, but then again I doubt you do either. For the average person to begin to understand the inner workings of a computer requires you to set aside the technical details and generalize everything. When they read about a Mac, they will see the word Unix, which also happens to appear in things written about Linux, and would inevitably make a connection about both being based off the same thing (which they are). In that sense, I'm correct - you're wrong. The average person doesn't differentiate between 'is' and 'based off'; most people take them in the same context.
So I may be wrong on some things when you get technical, but when you're talking to the average person who thinks a higher CPU core clock = a better processor, you end up being wrong because they won't give a damn about the FSB or anything else. Also, when you start flaming people and jumping on them over insignificant things, you come off as a complete douche. If I'm wrong on something then tactfully and politely correct me - don't act like an excerebrose know-it-all. Let's not even mention completely going off track about Windoze; servers aren't the only things that have multi-core processors.
I'm sure you'll try to multi-quote me with a slew of unintelligent-looking, lame comebacks and corrections, but in the end you'll just prove my point about the type of person you are. ****The End****
KCRic said:
Good job on being a douche. I didn't insult you in anything I said, and if you disagree with my perspective then state it; otherwise shut up. I didn't tell you English grammar isn't your forte, so maybe you should keep your senile remarks to yourself.
Click to expand...
Click to collapse
Agreeing or disagreeing is pointless when discussing FACTS. Perspective has nothing to do with FACTS. You can think whatever you like, but it doesn't make you right.
You seem to want to argue over a few technicalities, and I'll admit I don't have a PhD in computer engineering, but then again I doubt you do either.
Click to expand...
Click to collapse
Common mistake, assuming that everybody is the same as you. Try not to make that assumption again.
For the average person to begin to understand the inner workings of a computer requires you to set aside the technical details and generalize everything.
Click to expand...
Click to collapse
Generalizations lead to inaccuracies. You do not teach by generalizing, you teach by starting from the bottom and building a foundation of knowledge. Rene Descartes (aka Renatus Cartesius, as in Cartesian geometric system, as in the father of analytical geometry) said that the foundation of all knowledge is that doubting one's own existence is itself proof that there is someone to doubt it -- "Cogito Ergo Sum" -- "I think therefore I am". Everything must begin with this.
When they read about a Mac, they will see the word Unix, which also happens to appear in things written about Linux, and would inevitably make a connection about both being based off the same thing (which they are). In that sense, I'm correct - you're wrong. The average person doesn't differentiate between 'is' and 'based off'; most people take them in the same context.
Click to expand...
Click to collapse
... and need to be CORRECTED for it. The two kernels (the only components relevant to this discussion) are completely different! MACH is a MICRO kernel, Linux is a MONOLITHIC kernel. Superficial characteristics (which are OUTSIDE of the kernel) be damned, they are NOT the same thing and thinking that they are is invalid. The average person is irrelevant, FACTS are FACTS.
So I may be wrong on some things when you get technical, but when you're talking to the average person who thinks a higher CPU core clock = a better processor, you end up being wrong because they won't give a damn about the FSB or anything else.
Click to expand...
Click to collapse
So are you trying to tell me that IGNORANCE is BLISS? Because "giving a damn" or not has NO BEARING on reality. The sky is blue. You think that its purple and don't give a damn, does that make it purple? No, it does not.
Also, when you start flaming people and jumping on them over insignificant things, you come off as a complete douche. If I'm wrong on something then tactfully and politely correct me - don't act like an excerebrose know-it-all. Let's not even mention completely going off track about Windoze; servers aren't the only things that have multi-core processors.
Click to expand...
Click to collapse
Right, servers AREN'T the only thing running multi-core processors, but did you not read where I SPECIFICALLY said **SERVERS**? Wondoze is off track and UNRELATED. I brought up servers because THEY USE THE SAME KERNEL AS ANDROID. If a supercomputer uses Linux, do you not agree that Linux is CLEARLY capable of multiprocessing well enough to meet the needs of a simple phone?
I'm sure you'll try to multi-quote me with a slew of unintelligent-looking, lame comebacks and corrections, but in the end you'll just prove my point about the type of person you are. ****The End****
Click to expand...
Click to collapse
... perfectionist, intelligent, PATIENT in dealing with ignorance. And understand that ignorance is not an insult when it is true, and contrary to common "belief", does NOT mean stupid. Learn the facts and you will cease to be ignorant of them.
So hopefully this train can be put back on the tracks...
From what I am understanding from more technically minded individuals, Dual Core should help with battery life because it requires less power to run the same things as a single core. It can then probably be extrapolated that when pushed, Dual Core will be able to go well above and beyond its Single Core brethren in terms of processing power.
For now, it appears the only obvious benefit will be increased battery life and less drain on the processor due to overworking. Hopefully in the near future more CPU and GPU intensive processes are introduced to the market which will fully utilize the Dual Core's potential in the smartphone world. Thanks for all the insight.
dhkr234 - *slaps air high-five*

Has anyone tried the V6 Supercharger script?

http://forum.xda-developers.com/showthread.php?t=991276
Found that a couple days ago and was wondering if anyone has tried it or if most developers use something similar as a base.
I've tried it with Jermaine151's Minimalist ROM...and honestly I really don't notice any differences with how things run. My cousin, who has the OG Droid, swears by it.
localceleb said:
http://forum.xda-developers.com/showthread.php?t=991276
Found that a couple days ago and was wondering if anyone has tried it or if most developers use something similar as a base.
Click to expand...
Click to collapse
Kernel devs use a released kernel as a base, not a script. A kernel dev worth his salt has already made memory tweaks according to how he feels the system should run. I'm not sure what that script is supposed to fix or how lag and slow draw times are related to memory. Motorola devices are seriously challenged in that department, but HTC's are some of the best out there. I'm not sure where he came up with the idea that no-op IO queuing was as good as deadline. That couldn't be more wrong. That also makes me question his other tweaks. Basically, if you're running a custom kernel, especially something like the seriously modified ziggy or AOSP kernels, I'd avoid this.
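For anyone who wants to see what their own device is using, the scheduler is exposed through sysfs; a quick root-shell sketch (the block device name varies by phone, mmcblk0 is just the common case):
cat /sys/block/mmcblk0/queue/scheduler
# e.g. prints: noop [cfq] deadline   <- brackets mark the active scheduler
echo deadline > /sys/block/mmcblk0/queue/scheduler   # switch until next reboot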
RMarkwald said:
I've tried it with Jermaine151's Minimalist ROM...and honestly I really don't notice any differences with how things run. My cousin, who has the OG Droid, swears by it.
Click to expand...
Click to collapse
Did you get the feeling it was really aimed at Motorola devices, too?
loonatik78 said:
Did you get the feeling it was really aimed at Motorola devices, too?
Click to expand...
Click to collapse
Yeah I did. I kinda figured that running a custom ROM/kernel combo would be ideal instead of running that script, just because the devs have tweaked/maximized their performance already.
I don't know if that's the case with a Moto device, as I have the Xoom but never owned the phone. My cousin swears by it on his OG Droid, and pointed it out to me.
Like you said, I think with HTC devices you won't really notice much, if any differences. I haven't personally.
RMarkwald said:
Yeah I did. I kinda figured that running a custom ROM/kernel combo would be ideal instead of running that script, just because the devs have tweaked/maximized their performance already.
I don't know if that's the case with a Moto device, as I have the Xoom but never owned the phone. My cousin swears by it on his OG Droid, and pointed it out to me.
Like you said, I think with HTC devices you won't really notice much, if any differences. I haven't personally.
Click to expand...
Click to collapse
Well, like I said, I'm suspicious simply because he thinks no-op is a good IO scheduler. It kinda tells me he doesn't know his ass from a hole in the ground. No-op assumes mechanical storage mediums. Deadline is MUCH better for solid-state storage, and all the rest of the schedulers are improvements that may or may not be better, depending on how you use your device.
I know deadline is better...but not all phones have deadline available.
I'm no guru on the topic and I go by what I read and read threads about the topic.
I don't take credit for the non-supercharger tweaks as they aren't mine and include links that I used as resources.
Yes I do say noop or deadline would be preferred over other common options on most devices. But the main thing is that both would be faster than what most people have configured - cfq.
btw, this thread seemed convincing that noop performs very well on Android
http://forum.xda-developers.com/showthread.php?t=948001
So forgive me for not consulting you first.
Oh, also...
NOOP scheduler is best used with solid state devices such as flash memory or in general with devices that do not depend on mechanical movement to access data (meaning typical "hard disk" drive technology consisting of seek time primarily, plus rotational latency). Such non-mechanical devices do not require re-ordering of multiple I/O requests, a technique that groups together I/O requests that are physically close together on the disk, thereby reducing average seek time and the variability of I/O service time.[2]
Click to expand...
Click to collapse
http://en.wikipedia.org/wiki/Noop_scheduler
But then again, you probably would have given me wrong information.
PS. So it seems that you're not all that bright either
zeppelinrox said:
I know deadline is better...but not all phones have deadline available.
I'm no guru on the topic and I go by what I read and read threads about the topic.
I don't take credit for the non-supercharger tweaks as they aren't mine and include links that I used as resources.
Yes I do say noop or deadline would be preferred over other common options on most devices. But the main thing is that both would be faster than what most people have configured - cfq.
btw, this thread seemed convincing that noop performs very well on Android
http://forum.xda-developers.com/showthread.php?t=948001
So forgive me for not consulting you first.
Oh, also...
http://en.wikipedia.org/wiki/Noop_scheduler
But then again, you probably would have given me wrong information.
PS. So it seems that you're not all that bright either
Click to expand...
Click to collapse
Yes, I'm aware of all that. The shortcoming of no-op is that it doesn't take into account the demands of the data. Some data should be fetched before other data, thus requiring a re-ordering of requests. It is best used on solid-state devices, but not necessarily THE best choice for them. One might assume no-op to be roughly on par with deadline if some coarse assumptions about solid-state are made. No-op would be ideal on the SCSI5 array I built because a storage subsystem controller is re-ordering the data and caching it to memory. The latency times on that memory are exceedingly low, as are write operations. In fact, the controller would send write-confirm commands back to the system even before data was actually written to disk to allow for more I/O operations. NAND solid-state is a different creature though. Read latency is certainly fast, however write latency is much, much slower in comparison. Because of this, simply throwing read and write commands at the storage subsystem wastes a lot of time, since reads must wait on lengthy writes. Deadline holds a significant advantage over no-op because it will suspend write operations to ensure read request deadlines are met. In short, it mitigates some of the shortfall that comes with the NAND's lengthy write times. That's why I say it's significantly better.
I see.
That's very informative indeed.
Perhaps I can determine if a user has deadline available and, if so, use that; if not, use noop
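That detection is a one-liner in a script; a minimal sketch, assuming the common mmcblk0 sysfs path and a root shell:
SCHED=/sys/block/mmcblk0/queue/scheduler
if grep -q deadline "$SCHED"; then
  echo deadline > "$SCHED"    # prefer deadline when the kernel offers it
else
  echo noop > "$SCHED"        # otherwise fall back to noop
fi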
zeppelinrox said:
I see.
That's very informative indeed.
Perhaps I can determine if a user has deadline available and, if so, use that; if not, use noop
Click to expand...
Click to collapse
I've never professed to be a software guy by any stretch of the imagination. I will never be able to do what you and many, many others do with software, code, Linux, scripting... and of that. I just don't understand that stuff. I'm very much the hardware geek though. Column address strobes? Sense amps? I get that kind of talk. Looking at a script? It might as well be in Sumerian.
zeppelinrox said:
I know deadline is better...but not all phones have deadline available.
I'm no guru on the topic and I go by what I read and read threads about the topic.
I don't take credit for the non-supercharger tweaks as they aren't mine and include links that I used as resources.
Yes I do say noop or deadline would be preferred over other common options on most devices. But the main thing is that both would be faster than what most people have configured - cfq.
btw, this thread seemed convincing that noop performs very well on Android
http://forum.xda-developers.com/showthread.php?t=948001
So forgive me for not consulting you first.
Oh, also...
http://en.wikipedia.org/wiki/Noop_scheduler
But then again, you probably would have given me wrong information.
PS. So it seems that you're not all that bright either
Click to expand...
Click to collapse
I forgot... I also wanted to explain that XDA thread a little.
It looks like he's basing his times on sequential writes, reads, then erases. Were that similar to real world use, his results would be valid, but seldom does anything work like that. Without knowing the specific IC of the flash drive in use, certain features of it can only be guessed at. Unless it's a virgin device or written to 0's, chances are writes are also going to come in conjunction with erases, which are a completely different process, and just as lengthy as writes. Because of how NAND works, larger erases and writes can be accomplished much faster by the address block rather than on a file by file basis. This is because NAND can only address blocks at a time. It has no random access capability for pages or bytes, either reading or writing. In short, his test demonstrates an ideal circumstance, one that is RARELY the circumstance of the real world.
I ran his V6 Supercharger script, Kick Ass Kernel Tweaks, and the 3G turbocharger tweaks in build.prop....
noticed NO difference whatsoever...
Actually, that isn't true. The one thing that did seem to work was the part about making the home launcher "hard to kill". I was having issues with Sense restarting itself, and this alleviated that issue. I noticed that even if I tried to manually stop Sense, I couldn't kill it. Two weeks later I removed all trace of the V6 scripts and haven't had any issues since, no idea why, although now I can manually kill the Sense process and have it restart again.
Anyways... no, there was no performance increase whatsoever.
bast525 said:
I ran his V6 Supercharger script, Kick Ass Kernel Tweaks, and the 3G turbocharger tweaks in build.prop....
noticed NO difference whatsoever...
Actually, that isn't true. The one thing that did seem to work was the part about making the home launcher "hard to kill". I was having issues with Sense restarting itself, and this alleviated that issue. I noticed that even if I tried to manually stop Sense, I couldn't kill it. Two weeks later I removed all trace of the V6 scripts and haven't had any issues since, no idea why, although now I can manually kill the Sense process and have it restart again.
Anyways... no, there was no performance increase whatsoever.
Click to expand...
Click to collapse
I was curious as to the Sense launcher restarting itself, and if this would fix/have an impact on that issue. Thanks for the info!
Anyone that gave loonatik a hard time on this site seriously needs to reconsider; this guy knows his stuff.
loonatik78 said:
I forgot... I also wanted to explain that XDA thread a little.
It looks like he's basing his times on sequential writes, reads, then erases. Were that similar to real world use, his results would be valid, but seldom does anything work like that. Without knowing the specific IC of the flash drive in use, certain features of it can only be guessed at. Unless it's a virgin device or written to 0's, chances are writes are also going to come in conjunction with erases, which are a completely different process, and just as lengthy as writes. Because of how NAND works, larger erases and writes can be accomplished much faster by the address block rather than on a file by file basis. This is because NAND can only address blocks at a time. It has no random access capability for pages or bytes, either reading or writing. In short, his test demonstrates an ideal circumstance, one that is RARELY the circumstance of the real world.
Click to expand...
Click to collapse
I did think that the noop result was too good in relation to the other results.
I would not think that it was that much faster (since deadline is better) so I was somewhat surprised.
Thanks for elaborating
On the Sensation, the Supercharger script gave me no speed increase. It did fix my launcher redraws, though. The only thing this script helped me with is multitasking: switching between apps constantly was smoother, and more often they didn't get killed by Android's task manager.
On the Nexus S though, it did make a good amount of difference in the operation of the phone.
I think, on the faster phones, like the SGS2 and the Sensation, it only helps with multitasking..
Well I initially came up with it because of the launcher redraw issue... everything else was gravy... well... a whole lotta gravy...
I don't think SuperSmoother sounds as cool so I'll stick with SuperCharger anyway lol

What is all this Linaro stuff?

Hey all,
Can someone explain what this Linaro thing I hear about is, and what it means to us Nexus owners? Is it just something that is incorporated into the kernel, or the ROM?
Thanks in advance
Swiped on my Gnex
It is, in simple terms, really optimized code.
"Long is the way, and hard, that out of hell leads up to light."
yarly said:
Linaro is basically some compiler optimizations and tweaks. It turns off some strict checking the compiler normally does so it can use a previously unavailable mode of optimization during the process that converts the programming language into machine-readable code (basically what a compiler does, for those that didn't know). Any performance increase is in tasks that the CPU does, and those are much more limited on Android 4.0 than previously. It's not going to make your games run faster, and if it does much of anything, it *might* make a few things that are not already cached (stored) in memory load a little faster, but that's rather subjective as of now.
The Linaro team's demo benchmarks that were eaten up by the Android linkbait blogs and the community as a whole were also misleading. They showed framerates at double the normal rate, but only because their benchmark was doing software rendering (thus using the CPU) and wasn't capped at 30fps; the non-Linaro build uses double buffering with GPU rendering combined with vertical sync (vsync). PC gamers might know the term from triple buffering (used to avoid the latency [lag] issues caused by vsync), where you're capped at 60fps while using vsync due to staying in sync with the display refresh rate (60Hz). The only graphics gain would be wherever something is still using software rendering on Android 4.0, which isn't many areas.
Someone is bound to read this though and say, "But yarly, isn't 60fps better than 30fps, so we should disable GPU rendering, right?" No, lol. The GPU handles graphics much more efficiently than the CPU ever could, which means the CPU is way over-tasked when it has to deal with them. That means it spends time doing graphics when it should be reading/writing files, handling physics, and dealing with memory. If software (CPU) rendering were better, then there would be no OpenGL and no DirectX. Not to mention the framerates with hardware (GPU) rendering would kick the **** out of software rendering if it were unthrottled from vsync (which is not a good idea to do either).
In short, Linaro is mostly over-hyped, and performance increases from it are so minimal (and maybe specious) and far between that no one will be able to point and say, "Yes, this part right here when I'm using my phone is running faster due to Linaro!" Should developers not use it? If they can, why not, but it's not some holy grail that will make Android trounce every other mobile OS out there on performance.
Click to expand...
Click to collapse
Great write-up.
Linaro can improve app performance by about 20%, and maybe 100% for apps with vsync.
I tried a Linaro build but couldn't see any performance difference in the launcher or games.
meminiau said:
Hey all,
Can someone explain what this Linaro thing I hear about is, and what it means to us Nexus owners? Is it just something that is incorporated into the kernel, or the ROM?
Thanks in advance
Swiped on my Gnex
Click to expand...
Click to collapse
Linaro is an enhanced version of Linux. Linaro was created by cleaning up all the errors in the code that Google apparently didn't want to make perfect. That is what Linaro is.
An enhanced version of Linux that was simply cleaned up and recoded.
Thanks guys for the comments!
So Linaro is used in both kernels and ROMs, right?
I have tried lots of ROMs with my Nexus, and keep going back to Blackice. It doesn't seem to be Linaro 'optimised', so what ROMs are?
I saw a thread regarding Franko's Kernel and an offshoot being Linaro optimised, so I will look into that, coz I am already an avid Franko user. Just want to find a ROM that is also optimised this way to try...
any suggestions?
DLD511 said:
Linaro is an enhanced version of Linux. Linaro was created by cleaning up all the errors in the code that Google apparently didn't want to make perfect. That is what Linaro is.
An enhanced version of Linux that was simply cleaned up and recoded.
Click to expand...
Click to collapse
That's not even close to what Linaro is.
adrynalyne said:
That's not even close to what Linaro is.
Click to expand...
Click to collapse
I don't mean to be rude, but is there any point in your post? If what is being said is not what you say it is, would you mind sharing what Linaro actually is, seeing as that is part of the purpose of this thread?
meminiau said:
I don't mean to be rude, but is there any point in your post? If what is being said is not what you say it is, would you mind sharing what Linaro actually is, seeing as that is part of the purpose of this thread?
Click to expand...
Click to collapse
The reason I posted was to let the poster know that was NOT what Linaro is. The other posts covered it.
I don't appreciate your attitude. Anyone who starts a post with "I don't mean to be rude" fully intends to be rude.
adrynalyne said:
The reason I posted was to let the poster know that was NOT what Linaro is. The other posts covered it.
I don't appreciate your attitude. Anyone who starts a post with "I don't mean to be rude" fully intends to be rude.
Click to expand...
Click to collapse
I just would have appreciated it if, at the same time as clarifying what isn't correct, you could have done the same with what is.
There is so much info floating around regarding everything, and when someone pipes in and says something is not true, that can make it a little hard to work out what is fact and what isn't.
If you had done this, it would have been a lot more helpful than just a couple of words saying someone was wrong. But thanks for clearing up what your thoughts were; that I do appreciate

Will doing a factory reset and disabling encryption speed up the phone at all?

I've heard that doing so alleviates some lag that people have been complaining about. Any truth to this?
First of all, please note that disabling encryption requires root, which also requires doing a factory reset. While disabling encryption will certainly speed up the phone, Google has improved encryption in Android Marshmallow so that it doesn't decrease performance as much as it did in Lollipop. You may not notice the difference. As for a factory reset, it will certainly get rid of any lag caused by any changes you made to the phone, but not any caused by Android itself.
Sent from my Nexus 5X using Tapatalk
Root and decryption are two different things; it seems you can have either or both. I'm curious to know myself whether decryption gives a better experience. I don't really care about benchmarks, mostly whether it eliminates lag.
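If you're unsure what state your phone is actually in, Android exposes it as a system property, so it's a quick check over adb:
adb shell getprop ro.crypto.state
# prints "encrypted" or "unencrypted"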
Every post I've read has said that they haven't noticed a difference between decrypted and encrypted with the 5X - though I haven't seen any benchmarks comparing. If it helps, the Ars Technica review shows how the I/O performance compares to previous phones. (3rd graph set in the Performance section)
I'm not sure whether it is actually a Marshmallow-specific feature or not, but the 5X and the 6P are using the cryptography extensions that are part of the ARMv8 instruction set to perform encryption and decryption. The performance hit should be negligible.
Everyone clearly remembers the bad rep the N6 has for this, but it just didn't have proper support for this feature, though it apparently got a bit better later on. Right now it seems like jumping at ghosts for the 5X & 6P.
OP, which Android build are you on? I'm wondering if the build makes a difference. At least one person has returned their phone due to the lag, and had a replacement that didn't have that issue.
I was experiencing random lag with my n5x and I ended up decrypting the phone and disabling zram, and it made a big difference.
Before doing this, my phone was noticeably laggier and slower than my nexus 6 (decrypted). After decrypting and disabling zram, my n5x is now just as fast as my n6.
I did a speed test like those youtube videos, where you open apps at the same time and see which one finishes first, and now the n6 and n5x both finish opening apps almost exactly at the same time.
My build is mda89e. I don't have any noticeable lag, I was just curious if it would change anything.
How would you characterize the lag if it were present?
I decrypted and rooted and did not notice any difference in daily use (don't care for benchmarks). I did however notice that the phone boots much faster after decryption.
dwang said:
I was experiencing random lag with my n5x and I ended up decrypting the phone and disabling zram, and it made a big difference.
Before doing this, my phone was noticeably laggier and slower than my nexus 6 (decrypted). After decrypting and disabling zram, my n5x is now just as fast as my n6.
I did a speed test like those youtube videos, where you open apps at the same time and see which one finishes first, and now the n6 and n5x both finish opening apps almost exactly at the same time.
Click to expand...
Click to collapse
How'd you disable zram?
lysm bre said:
How'd you disable zram?
Click to expand...
Click to collapse
I'd like to know this as well
Use trickster mod or kernel auditor to disable it.
Sent from my Nexus 5X using Tapatalk
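If you'd rather not install an app for it, the same thing can be done by hand; a minimal root-shell sketch, assuming the device names its compressed swap zram0 as most do:
cat /proc/swaps                  # confirm zram is actually active first
swapoff /dev/block/zram0         # stop swapping to the compressed device
echo 1 > /sys/block/zram0/reset  # free the memory it was holding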
Hi
Disabling ZRAM will wear out your flash memory quicker; the whole point of ZRAM is to speed up the phone and protect flash memory from hundreds or thousands of tiny write operations.
From the Wiki (https://en.wikipedia.org/wiki/Zram)
zram increases performance by avoiding paging to disk and using a compressed block device in RAM instead, inside which paging takes place until it is necessary to use the swap space on a hard disk drive. Since using RAM is an alternative way to provide swapping on RAM, zram allows Linux to make more use of RAM when swapping/paging is required, especially on older computers with less RAM installed.[1][2]
Even when the cost of RAM is low, zram still offers advantages for low-end hardware devices such as embedded devices and netbooks. Such devices usually use flash-based storage that has limited lifespan due to its nature, which is also used to provide swap space. The reduction in swap usage as a result of using zram effectively reduces the amount of wear placed on such flash-based storage, resulting in prolonging its usable life. Also, using zram results in a significantly reduced I/O for Linux systems that require swapping.[3][4]
Click to expand...
Click to collapse
Decryption doesn't make much difference (it will speed up boot times if you had a power-on password, but that is simply because the phone was booting twice to offer a protected Android environment first to get the password, and this was optional anyway; we get the choice during setup). The whole phone isn't encrypted anyway, just user data, hence overall the difference between encrypted and decrypted isn't that wide.
Unless we have some evidence of the speed-up, I'm tempted to put any suggestion of a speed-up down to the placebo effect :laugh:
If there is an improvement, it might be as simple as a factory reset being good for the phone, due to some optimization of the flash memory undertaken at that time, or recompiling apps, that was skipped when loading the device with an image at the factory. Perhaps that is why some people are seeing no problems with lag: they've played about first and had a mess around, then did a factory reset at some point to set their device up as a daily driver.
A true test would be to do a factory reset with everything at defaults, run a measured test, then decrypt and remove ZRAM and do a second test.
Regards
Phil
This is wrong. Disabling zram doesn't mean you are adding a swap partition on the flash, so the kernel isn't going to write to the flash.
You have no idea what you are talking about.
Sent from my Nexus 5X using Tapatalk

F2FS support for our Pixel 2 XL?

I've seen that the Pixel 3 and later models have gained full F2FS filesystem support, since F2FS implemented native support for file-based encryption.
Is this possible with our device, or did I miss something? Is anyone else looking for this to be implemented?
All flash devices seem to benefit long term from F2FS. It also gets repetitive having to run fstrim in a terminal manually on a weekly basis for EXT4. A part of me says I'm nitpicking because everything functions perfectly on this device. But I see even greater potential here with F2FS.
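(For reference, this is the weekly ritual in question, from a root shell; -v just makes fstrim report how much it discarded:)
fstrim -v /data
fstrim -v /cache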
I get that some people get excited about new filesystems, see benchmarks, and buy into it totally, but I tested f2fs on my previous device, the Galaxy Note 4 (and before that the S4), and I was never convinced it did anything but look good in synthetic benchmarks. In everyday use I couldn't have told you the difference.
But one difference I can tell you: once you've had your first corrupt fs using f2fs, you'll quickly retreat to the safety of ext4... I've never had a corrupt fs under ext4....
So I guess you're now going to say, "but newer devices use f2fs, so it can't be that unsafe", to which I'm going to answer "yes, but they've since tuned down or disabled some of the flags/features that people used to use to get inflated synthetic benchmarks, so it's no faster in the real world than ext4".
And like I said, I could never tell the difference in real-world usage between the two....
Also, I'm pretty sure that on a stock ROM you don't need to manually run fstrim; I assume that would run on a schedule or during idle.
Plus TWRP for the Pixel 2/2XL doesn't support formatting partitions in f2fs, so there's that.
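If anyone wants to check whether their kernel could even mount f2fs before worrying about TWRP, a quick root-shell check (the config.gz path only exists if the kernel exposes its config):
grep f2fs /proc/filesystems          # listed = the running kernel has the driver
zcat /proc/config.gz | grep -i F2FS  # shows the relevant CONFIG_F2FS_* options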
73sydney said:
I get that some people get excited about new filesystems, see benchmarks, and buy into it totally, but I tested f2fs on my previous device, the Galaxy Note 4 (and before that the S4), and I was never convinced it did anything but look good in synthetic benchmarks. In everyday use I couldn't have told you the difference.
But one difference I can tell you: once you've had your first corrupt fs using f2fs, you'll quickly retreat to the safety of ext4... I've never had a corrupt fs under ext4....
So I guess you're now going to say, "but newer devices use f2fs, so it can't be that unsafe", to which I'm going to answer "yes, but they've since tuned down or disabled some of the flags/features that people used to use to get inflated synthetic benchmarks, so it's no faster in the real world than ext4".
And like I said, I could never tell the difference in real-world usage between the two....
Also, I'm pretty sure that on a stock ROM you don't need to manually run fstrim; I assume that would run on a schedule or during idle.
Plus TWRP for the Pixel 2/2XL doesn't support formatting partitions in f2fs, so there's that.
Click to expand...
Click to collapse
Totally agreed on this. I tried it on the Nexus 7 (2012) and Galaxy Nexus back in the day, was never convinced, and the downsides are way too much. Ext4 is overall more mature and the available toolkits are much better. If anyone wants to read about it, go to the Arch wiki; they have a comprehensive write-up on the different filesystems.
EXT4 is indeed mature, but it was also designed to run on spinning rust, so it will always be a patchwork to keep it running efficiently on flash storage (scripts to run routine fstrims).
@ReVo_007, you say to look up the wiki on each filesystem; well, if anyone were to do that, they would realize F2FS is the optimal filesystem for flash storage, since it is designed specifically for flash: it maximizes the life of the chip by distributing writes evenly across all data blocks, and it increases read-write performance by building a cache of data blocks while retaining the drive's original read-write performance.
I hope to see F2FS on our device; the performance boost alone would be worth it, especially with apps that rely heavily on cache, like Chromium.
I've tested F2FS on many devices, and actually, the faster your processor, the more it shines, because the cache can run more efficiently, especially with native compression enabled. My desktop PC on Fedora 34 runs great, and my cheap modded Chromebook has run as fast as a high-end one ever since I formatted all the Fedora partitions as F2FS, a fun project of mine.
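In case anyone wants to try the same thing on a Linux box, the rough recipe is below; /dev/sdb1 and the mount point are placeholders, and native compression needs kernel 5.6 or newer:

Code:
# format with compression support enabled (this wipes the partition)
mkfs.f2fs -l data -O extra_attr,compression /dev/sdb1
# mount with LZ4 as the compression algorithm
mount -t f2fs -o compress_algorithm=lz4 /dev/sdb1 /mnt/data
# compression is opt-in per directory: new files under it get compressed
chattr +c /mnt/data/browser-cache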
ThunderThighs said:
EXT4 is indeed mature, but it was also designed to run on spinning rust, so it will always be a patchwork to keep it running efficiently on flash storage (scripts to run routine fstrims).
@ReVo_007, you say to look up the wiki on each filesystem; well, if anyone were to do that, they would realize F2FS is the optimal filesystem for flash storage, since it is designed specifically for flash: it maximizes the life of the chip by distributing writes evenly across all data blocks, and it increases read-write performance by building a cache of data blocks while retaining the drive's original read-write performance.
I hope to see F2FS on our device; the performance boost alone would be worth it, especially with apps that rely heavily on cache, like Chromium.
I've tested F2FS on many devices, and actually, the faster your processor, the more it shines, because the cache can run more efficiently, especially with native compression enabled. My desktop PC on Fedora 34 runs great, and my cheap modded Chromebook has run as fast as a high-end one ever since I formatted all the Fedora partitions as F2FS, a fun project of mine.
Click to expand...
Click to collapse
There is no "performance boost" under real-world conditions. In tests on previous devices where F2FS WAS available, there was only a difference in synthetic benchmarks, which is what everyone gets a boner about... The difference isn't enough for me to give up the stability of EXT4.
fstrims are handled by the OS in modern Android versions; no scripts needed...
On this device, if you're not gaming, you'll gain more by disabling swap than by changing the filesystem to F2FS...
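Both of those are easy to check from adb, by the way; the zram path below is an assumption, so look at /proc/swaps first, and sm idle-maint needs a reasonably recent Android:

Code:
# see which swap device is active (typically zram on this device)
adb shell cat /proc/swaps
# disable it until the next reboot (root needed)
adb shell su -c "swapoff /dev/block/zram0"
# manually kick off the idle-maintenance pass, which includes fstrim
adb shell sm idle-maint run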
73sydney said:
There is no "performance boost" under real-world conditions. In tests on previous devices where F2FS WAS available, there was only a difference in synthetic benchmarks, which is what everyone gets a boner about... The difference isn't enough for me to give up the stability of EXT4.
fstrims are handled by the OS in modern Android versions; no scripts needed...
On this device, if you're not gaming, you'll gain more by disabling swap than by changing the filesystem to F2FS...
Click to expand...
Click to collapse
Just because you didn't notice any difference and experienced instability on a 4+ year old device doesn't mean the entire filesystem is garbage.
I experience quite the opposite on my desktop PC and OnePlus One; in fact, not only do apps open faster, the system boots faster, large file transfers are 25% faster, and fstrim isn't ever needed.
BTW, Android does have a scheduled fstrim, but I've found it's only run on a monthly basis, and that's unacceptable for how heavily I use my device.
You're also forgetting that the main draw of F2FS is the proven increase in the service life of the flash storage it's deployed on.
I never said the "filesystem is garbage"; please don't assign words to me that I never said. Typical fanboy move, and a tired one at that.
Also, top tip: the entire slew of devs is not just waiting here for you to tell them what's not suitable for your usage, ready to jump.
BTW, modern Android does fstrim daily at idle, usually at night when you're all tucked up dreaming about how F2FS is the answer to every perceived issue.
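If you don't believe me, check the marker file yourself (root needed; this assumes the toybox stat applet that recent Android ships):

Code:
# the OS touches this marker each time it finishes a trim pass
su -c "stat -c %y /data/system/last-fstrim"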
I have an S2 that still works and has only ever had an EXT filesystem on it... It's now a decade old, with far inferior flash memory; how much more life do you expect to get?
I've yet to meet anyone who has said to me, "I just upgraded my phone because the flash finally died on it". They're usually aspirational techno-numptie sheeple who think they need a new phone every 6 months because giant skivvy told them they did. Apple really IS the root of all evil in modern tech.
Sorry, but I have to debunk bunkum where I see it.
If you're determined to run fstrim more often than the schedule, feel free to give the following Magisk module I've cooked up just for you (and anyone else reading this) a crack.
The only active file is fstrim.sh, which is written to /data/adb/service.d/fstrim.sh so it's executed at boot.
It merely checks the date of /data/system/last-fstrim, and if it's been more than 24 hours, it starts an fstrim on /data, /cache and /system.
Code:
removed due to ungratefulness
Please note: as this only executes after boot has completed, it may take some time to start, depending on system processes and other service scripts.
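For the curious, here's a minimal sketch of what such a service script might look like, reconstructed from the description above rather than from the original (removed) module:

Code:
#!/system/bin/sh
# /data/adb/service.d/fstrim.sh - executed by Magisk late in boot
# wait until the system reports boot completed before doing anything
while [ "$(getprop sys.boot_completed)" != "1" ]; do
  sleep 5
done

MARKER=/data/system/last-fstrim
NOW=$(date +%s)
# modification time of the marker in seconds since the epoch (0 if missing)
LAST=$(stat -c %Y "$MARKER" 2>/dev/null || echo 0)

# only trim if the last pass was more than 24 hours (86400 s) ago
if [ $((NOW - LAST)) -gt 86400 ]; then
  for MNT in /data /cache /system; do
    # -v reports how much was trimmed; /system may be read-only and refuse
    fstrim -v "$MNT"
  done
  touch "$MARKER"
fi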
And then there's always this (ad-free) app if you want even more control, or just want something that does what my module does on a daily basis:
Trimmer (fstrim) - Apps on Google Play
Trim your device NAND chip manually and fix lags.
play.google.com
You might also want to read this post:
mFSTRIM: A REAL, FOSS fstrim utility for Android, no root necessary
Hey XDA! I actually just posted about another app called Buoy, but over my spring break I went ham and made four apps and I wanted to show off two of them to get some feedback. So here's mFSTRIM! What is fstrim? So you know how hard drives get...
forum.xda-developers.com
MOD EDIT: Quote removed since post removed.
1) It's not a sketchy 3rd-party script, and I posted its contents for safety & clarity. If that's sketchy, you will probably hate most modules and zip files on XDA.
2) If you can do it in a terminal, why are you even raising an issue with fstrim? Feel free to enter the commands manually every time you need them, just to avoid taking any assistance offered...
3) I don't have a vendetta against anything; I'm merely giving an opinion based on known facts. That you don't like facts is not my problem.
4) I've given you more than a few links to apps etc. to get you actual improvements without waiting for F2FS, which will likely never come, because, as already highlighted, it's not as simple as just adding it to a kernel. You're welcome.
MOD EDIT: A sentence repeating a portion of the deleted post and answering it deleted.
Go and look elsewhere for help is my suggestion; you don't appear to want to take any advice given here...
I literally don't care what filesystem you use; that you even think I do is troubling. You made that an issue, not me...
Accept the assistance offered or not, no one cares, but insulting me and the entire forum is unlikely to win you any further help.
ThunderThighs said:
I've seen that the Pixel 3 and later models have gained full F2FS filesystem support since F2FS implemented native support for file-based encryption.
Is this possible with our device, or did I miss something? Is anyone else looking for this to be implemented?
All flash devices seem to benefit long term from F2FS. It also gets repetitive having to run fstrim manually in a terminal on a weekly basis for EXT4. Part of me says I'm nitpicking, because everything functions perfectly on this device, but I see even greater potential here with F2FS.
Click to expand...
Click to collapse
F2FS is designed with flash storage in mind. However, EXT4 is used because it has been proven to be more robust; in other words, the chances of corrupting your data are far smaller on EXT4 than on F2FS. That is why, unless Google changed it for the Pixel 3 and onward, they use EXT4 for their partitions.
Is it possible on the Pixel 2 XL? Anything is possible, but if you want it, you're going to have to do the work yourself. The developers on the forum haven't implemented it in their ROMs, and they likely have a very good reason behind it. Ask them, and I'm sure they'll tell you.
Now, you claim F2FS is better than EXT4. Do you have empirical data to back that up, or is it simply your opinion? If the former, show your data so everyone can make an informed decision. If the latter, that's something you are entitled to, but your right to express that opinion stops when it starts trampling on the opinions of others.
MOD EDIT: Unnecessary comments removed.
