Am I stating the obvious here?
HTC handsets are getting faster and faster and yet I find the devices less and less responsive.
However, having installed a ROM with an older Windows build, I immediately found it to be faster (even one running Cookie's Home Tab).
(This was going first from "Energy.RHODIUM.23569.Sense2.5.Cookie.May.16" to "Energy.RHODIUM.21911.Sense2.5.Cookie.Jul.24" and then to "simplicity_3_September_21887_2016CHT").
Is this a general rule that seems to be overlooked in a quest to get the latest Microsoft title?
Is this like the "great advantages" of Windows 7 over XP (e.g. none except Aero Snap and a need for faster hardware)? (Caveat: I know there may be holes in this statement.)
Anyway, besides ranting...
PLEASE CAN SOMEONE SUGGEST A MARINATED AND COOKED ROM WITH THE OLDEST WINDOWS BUILD AVAILABLE? I AM ON A QUEST TO GET MY TP2 RUNNING AS FAST AS AN ANDROID DEVICE WHILE STILL BEING ABLE TO USE MY WM APPS.
Give Jacko's Old School ROM a try!
Jacko's Oldschool is a revelation.
Nothing is as slick, and I love it. Currently trying another Jacko ROM, though.
profjekyll said:
Is this like the "great advantages" of Windows 7 over XP (e.g. none except Aero Snap and a need for faster hardware)? (Caveat: I know there may be holes in this statement.)
Maybe I'm stating the obvious too... but obviously it is for support and compatibility with newer technologies and hardware? Which is true for Windows Mobile platforms, is it not?
eXilius333 said:
Maybe I'm stating the obvious too... but obviously it is for support and compatibility with newer technologies and hardware? Which is true for Windows Mobile platforms, is it not?
Wellllllll.... Yes, newer versions of Windows do have much better hardware support, and do have the ability to support newer runtime platforms like .NET and newer hardware interfaces such as DirectX. True.
I still wonder, perhaps manically, whether Microsoft has a deal going with major hardware companies to always tax the hardware as much as possible so that faster machines are needed.
And while I'm on this rant... DELL! What a bunch of gits, putting multiple (poor) security products on their systems (e.g. McAfee and AOL Security), which are a ticking time bomb waiting to nerf the computer once it's out of warranty!
And to continue my rant: it's all "processor, processor, processor". How many off-the-shelf systems come with faster hard disks? The TRUE bottleneck of the day is not a slightly faster CPU, RAM, or FSB (although that is of course nice). The glaring bottleneck is hard-disk thrashing (which takes place even if you have a squiggabyte of RAM), which has been engineered by our pals at Microsoft. Fit a 10k RPM disk, or better yet an old 15k SCSI server disk (which are pretty cheap if you can manage the SCSI nonsense), and your general PC performance increases more than it would from the latest RAM type.
Anyway, I digress. Has anyone looked at Gonzo's? Is it possible that his ROM is fast because he uses a weird old kitchen no one else does?
eXilius333 said:
Maybe I'm stating the obvious too... but obviously it is for support and compatibility with newer technologies and hardware? Which is true for Windows Mobile platforms, is it not?
And another thing: after installing AeroSnap for XP, I think XP is better, and better supported, by far than 7/Vista.
profjekyll said:
And another thing: after installing AeroSnap for XP, I think XP is better, and better supported, by far than 7/Vista.
PS - Thanks for helping me release my pent-up "Pentium" aggression.
profjekyll said:
And another thing: after installing AeroSnap for XP, I think XP is better, and better supported, by far than 7/Vista.
Are you being serious?
Vista and, subsequently, 7 are far more robust operating systems than XP in terms of networking, security, multi-core usage, memory usage (how memory is used, not how much), and support for new technologies: multi-touch capacitive touchscreens; 64-bit (unless you're using the discontinued XP 64-bit, which ran on the old, discontinued Itanium processors), without which total system memory is capped at 4GB; Bluetooth advances; and a whole mess of other technologies that you can go look up yourself.
Do you only care about looks and speed? And don't mind blue screens of death from conflicting drivers or unreleased memory? Or maybe you like your background services unnecessarily exposed? I see...
I don't know how you formed your 'opinion' about XP or what "information" you used to form it... but here (http://www.techradar.com/news/softw...red-windows-7-vs-vista-vs-xp-615167?artc_pg=1) is one of many articles about some of the differences between the operating systems. If you actually studied the architecture of 7 vs. XP, I think you'd find your response about AeroSnap (lol?) substantially short-sighted...
Vista brought a new version of the NT kernel, featuring reworked memory management, support for new technologies, etc., re-optimized from the previous "swap as much as possible" approach to "use as much memory as possible with a multi-core processor". That's why it behaves the way it does on low RAM and single-core CPUs. Comparing XP vs. Vista (and later 7) is like comparing Windows Mobile 5.0 and Windows Phone 7.
Vista was a revolution for me; I used it for 1.5 years without a single issue or reinstall. I'm not kidding (Ultimate x64). It's now a year since I moved to Windows 7 Professional x64, and I've never been happier with a system than in the past 2.5 years.
Anyway, new builds are not necessarily faster or slower; some are slower, some are faster. E.g. 21910 can be slower than 21909, but 21911 can be faster again, etc. Sometimes they bring some change in the drawing code or some optimization; other times they add some (hidden to you) feature.
The best combo of speed, RAM, and user experience is IMHO my LBFAR WM6.5, featuring the 21899 build and TF3D (from WM6.1 ROMs), with over 320MB of free ROM and 115MB of free RAM, enabling really awesome multitasking; all the running apps don't even fit in the task-list window.
Haha - you're both wrong and you know it (about XP, that is).
Thanks for the advice though!
profjekyll said:
And to continue my rant: it's all "processor, processor, processor". How many off-the-shelf systems come with faster hard disks? The TRUE bottleneck of the day is not a slightly faster CPU, RAM, or FSB (although that is of course nice). The glaring bottleneck is hard-disk thrashing (which takes place even if you have a squiggabyte of RAM), which has been engineered by our pals at Microsoft. Fit a 10k RPM disk, or better yet an old 15k SCSI server disk (which are pretty cheap if you can manage the SCSI nonsense), and your general PC performance increases more than it would from the latest RAM type.
profjekyll said:
Haha - you're both wrong and you know it (about XP, that is).
I don't mean to sound rude, but you're* wrong and you "don't" know it.
http://www.zdnet.com/blog/ou/how-higher-rpm-hard-drives-rip-you-off/322
Read that... or any of countless articles on hard drive speeds. Again, I don't know where you get your information? I studied only a little architecture for my Computer Science degree, but it was enough to learn that HD RPMs are not the "true" bottleneck of anything... Solid state can potentially be faster than RPM-based disks, but this idea that RPMs are the bottleneck is outdated and does not account for today's programs or paradigm :/ I am terribly sorry to break it to you... opinions are not necessarily reality. :'(
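To put numbers on the RPM argument: average rotational latency falls straight out of spindle speed. A quick sketch, using the standard half-revolution approximation (it deliberately ignores seek time, caching, and queueing, which often matter more):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: time for half a revolution, in ms."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

lat_7200 = avg_rotational_latency_ms(7200)    # typical consumer drive
lat_15000 = avg_rotational_latency_ms(15000)  # 15k SCSI/SAS server drive
```

Going from 7200 to 15000 RPM roughly halves rotational latency (about 4.2 ms down to 2.0 ms), which is real but modest next to seek times, so spindle speed alone doesn't decide whether the disk is "the" bottleneck.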
On a final note, you really SHOULD try OndraSter's ROM... you can even install 6.5.5 with TouchFlo3D, which is supremely fast. Why? Because it is Sense 2.1 and, even more so, 2.5 that slow down our TP2s, not so much the OS increments. I definitely recommend his if you really want speed, memory, and an unbloated foundation to install whatever you wish...
At first I was going to let this bit slide, but... well...
profjekyll said:
And to continue my Rant, it's all "Processor, Processor, Processor". How many of the shelf systems come with faster hard disks? The TRUE bottleneck of the day is not "Slightly faster CPU / RAM / FSB" (although this is of course nice). The Glaring bottleneck is Hard Disk thrashing (which takes place even if you have a squiggabyte of RAM) which has been engineered by our pals Microsoft. Having a 10k RPM disk, or even better an old 15K SCSI server disk (which are pretty cheap if you can manager the scuzzy SCSI nonsense) then your general PC performance increases more than the latest Ram type.
That statement is just wrong in too many (almost all) ways and scenarios. It is about "processors, processors, processors" because applications are becoming richer, the web is becoming richer, and the programs 'average' people--maybe not you--interact with are requiring more resources such as memory and processor time. Even if the disk were "faster" (such as a potential SSD), the system would still have to wait on the processor or on memory management (although I know little about this; you can read further yourself). The world is increasingly focused on "multi-programming": dozens of rich web 2.0 sites open in tabs, updates/patches downloading, user content uploading or multimedia downloading, photo editing, or rich software development... and those are bottlenecked mostly by system memory, CPU, GPU, and network speed, not simply "disk thrashing". You may be used to a system with less than 4GB of assignable memory pushed to its fullest. Have you actually used Vista, 7, or their 64-bit versions for your everyday tasks with your everyday programs, as you do XP, for a sufficient duration?
If you have enough system memory you won't experience "disk thrashing". The term "thrashing" refers to a situation generally caused by your physical memory being full (or nearly full), so that constant page swaps must occur... it does NOT refer to normal page-swap activity. That is not thrashing; it is how the memory-management algorithm was designed... most "work" lives in your RAM.
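The distinction between normal paging and thrashing can be sketched with a toy LRU page-replacement model (purely illustrative; a real VM subsystem is far more sophisticated). When the working set fits in physical frames, faults stop after warm-up; when a cyclic working set exceeds the frames by even one page, every access faults:

```python
from collections import OrderedDict

def count_faults(accesses, num_frames):
    """Simulate LRU page replacement and return the number of page faults."""
    frames = OrderedDict()  # page -> None, ordered oldest to most recent
    faults = 0
    for page in accesses:
        if page in frames:
            frames.move_to_end(page)         # hit: mark most recently used
        else:
            faults += 1                      # miss: page must come from disk
            if len(frames) >= num_frames:
                frames.popitem(last=False)   # evict the least recently used
            frames[page] = None
    return faults

# A working set of 4 pages, cycled 25 times (100 accesses total).
workload = list(range(4)) * 25

fits = count_faults(workload, num_frames=8)      # RAM holds the working set
thrashes = count_faults(workload, num_frames=3)  # RAM is one frame short
```

With enough frames there are only the 4 cold-start faults; one frame short of the cyclic working set, LRU faults on all 100 accesses, which is the "constant page swaps" situation described above.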
eXilius333 said:
I don't know where you get your information? I studied only a little bit of architecture for my Computer Science degree
Clearly.
Thanks for the WM help though.
eXilius333 said:
If you have enough system memory you won't experience "disk thrashing".
SOOOOOOOOO Wrong!
eXilius333 said:
HD RPMs are not the "true" bottleneck of anything... Solid state can potentially be faster than RPM-based disks, but this idea that RPMs are the bottleneck is outdated
SSDs vs. traditional disks... both have advantages and disadvantages. SSDs seem to have very fast access times but slow throughput, meaning they are good at rewriting lots of small files but poor at shovelling a lot of data. Traditional disks are the reverse...
I don't care which disk you go for, but the fact is disks really are one of the main bottlenecks in day-to-day computing.
eXilius333 said:
On a final note, you really SHOULD try OndraSter's rom... you can even install 6.5.5 with TouchFlo3D which is supremely fast, why? Because it is Sense 2.1 and, even more so, 2.5 which are slowing down the performance of our TP2's not so much the OS increments. I definitely recommend his if you really want speed, memory, and an unbloated foundation to install whatever you wish...
Thanks, that is VERY insightful, TF3D vs. Sense... Thank you, thank you.
Thrashing is not the same as background paging. The Windows Vista/7 paging system is tremendously more efficient compared to 2000/XP.
Vista's biggest flaw was a slight lag in foreground processing, which made it feel slower. Windows 7 changed the foreground priorities around.
As for WM builds, Sense is a dog. Especially once you go past 2.1. I basically made the Foundation ROM because I wanted a slimmed down 6.5 ROM with lots of free memory. I think it's as stable as 6.1. I've also given up on Sense and moved to SPB, which I think helps the stability and battery life.
I can't tell if he's trolling or serious.
Yes, XP is faster in the sense that it requires fewer resources to boot.
For that matter, Chromium devastates XP.
The HDD vs. SSD question:
SSDs do not have bad throughput; the only problem is that when you write numerous little files, the drive still has to update the allocation table to record where they all are, which is true of every form of media. An SSD still writes faster than conventional HDDs.
Read speeds are phenomenal, and large-file write speeds are quite good too.
CPU use:
XP only truly utilizes dual cores; beyond that it really starts losing efficiency. Vista was a failboat, so for this post I'll only refer to Win7 from here on. Win7's kernel fixes the holes in XP's multi-core handling, as XP was never intended for multi-core systems.
If you have a fresh install of XP and a fresh install of Win7 Basic, give them both about 15 reboots, then boot them both. It will be about the same time, within a second of each other.
I'd love to finish this, but I have other things I must attend to atm.
Looks like everybody got stuck on a P III 1.2GHz with 256MB RAM when Vista came out (which was the average PC at the time). I was happily using XP on a Barton at about 2GHz with 2.5GB RAM, hating Vista at the time. Then I upgraded to an E2200 with 2GB RAM and reinstalled XP; everything worked fine. Then I upgraded to 4GB RAM and thought, "Hey, now I have enough RAM, let's try 64-bit."
Since XP64 wasn't available in my native language (and I don't like English Windows), and it was pretty much abandoned by MS because hardly anyone used it (like... 1%? maybe less?), I tried Vista Ultimate x64. Compared to XP, I felt alive, modern, able to multitask; the system could use all my cores and all my RAM without killing my hard drive. WHAT A FEELING!
I used Vista for 1.5 years while changing pretty much my whole PC (except the motherboard), without a single issue. Compared to XP, which died in like... 4 months tops (I still don't understand how anybody's XP could last more than half a year), it was a huge difference. No slowdown after a few days; everything ran fast. I switched about the time SP1 came out and never looked back. Vista was an awesome update for me. NT 5.1 was too archaic for current hardware. You should never run a new system on old hardware and old technologies (e.g. NetBurst vs. Core 2), because it is built for NEW hardware.
Also, Vista brought new options and requirements for drivers, so many of them weren't compatible. That was a major issue, but on the side of the OEMs, which didn't deliver drivers for new hardware in time, not on the side of MS. After a few months there were Vista drivers that worked OK.
Then Win7 came out with somewhat enhanced priorities for processes (like the UI), also featuring new cool stuff and a minor kernel update (NT 6.1).
Sorry, I just loved Vista and never understood anybody who hated it. If they had tried it on the right hardware about a year after launch, when everything was fixed and drivers were available, everything would have been different; Win7 could have been delayed and featured even more changes. All in all, Vista featured a huge kernel upgrade... something like CE4 vs. CE5.
But this is heavily OT; we should keep to ROMs. If a mod is about to delete it, please leave it up for a while so the people above can read it.
Rajinn said:
I can't tell if he's trolling or serious.
Genuinely not trolling. Just ranting + convinced I am right.
Joe USer said:
As for WM builds, Sense is a dog. Especially once you go past 2.1. I basically made the Foundation ROM because I wanted a slimmed down 6.5 ROM with lots of free memory. I think it's as stable as 6.1. I've also given up on Sense and moved to SPB, which I think helps the stability and battery life.
Ok, so Sense is a dog... I am getting that, generally. 2.1 is pretty fast but lacks hand buttons on the home page (as far as I can see). I want to be able to press once, maximum twice, from the home screen to phone my GF or open CoPilot etc.
I have never found this out... perhaps someone can enlighten me:
If I got a raw WM ROM without any Sense on it (haha, "no sense"... nvm), can you install Sense OR SPB OR... as an alternative "front end" using a .cab file or similar?
Or is this not possible, because Sense etc. are very intrinsic to the build of WM?
If so, where can I get these .cabs?
What other front ends are there to WM?
At this rate, I expect I will end up learning C# and writing my own. Well, probably not.
So with the new Dual Core phones coming out I'm wondering... What's all the hullabaloo?
I just finished reading the Moto Atrix review from Engadget and it sounds like crap. They said docking to the ridiculously priced webtop accessory was slow as shiz.
Anyone who knows better, please educate me. I'd like to know what is or will be offered that Dual Core will be capable of that our current gen phones will NOT be capable of.
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life. If we end up with more data-intensive apps and Android becomes more powerful, multi-core CPUs will help a lot as well. Naturally, Android will need to be broken down and revamped to utilize multiple cores to their full potential, though. At some point I can see Google using or merging a large part of the desktop Linux kernel to help with that process.
At the rate Android (and smartphones in general) is progressing, someday we may see a 64-bit OS on a phone, and we will definitely need multi-core CPUs then. I know it's a bit of a dream, but it's probably not too far-fetched.
KCRic said:
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life.
I'd really, REALLY like to know how you came to that particular conclusion. While a dual core might not eat through quite as much wattage as two single cores, one that takes less power is pure snake oil IMO. I have yet to see a dual-core CPU rated lower than a comparable single core on the desktop. Why would this be different for phones?
Software and OSes that can handle a dual-core CPU need additional CPU cycles to manage the resulting threading, so if anything, dual-core CPUs will greatly, GREATLY diminish battery life.
The original poster's question is valid. What the heck would one need dual-core CPUs in phones for? Personally, I can't think of anything. Running several apps in parallel was a piece of cake way before dual-core CPUs, and more power can easily be obtained by increasing the clock speed.
I'm not saying my parent poster is wrong, but I sure as heck can't imagine the physics behind his statement. So if I'm wrong, someone please enlighten me.
I can see dual cores offering a smoother user experience -- one core could be handling an audio stream while the other is doing phone crap. I don't see how it could improve battery life though....
The theory is that two cores can accomplish the same thing as a single core while each only working half as hard; I've seen several articles stating that dual cores will help battery life. Whether that is true I don't know.
Sent from my T-Mobile G2 using XDA App
Kokuyo, while you do have a point about dual cores being overkill in a phone, I remember long ago people saying "why would you ever need 2GB of RAM in a PC" or "who could ever fill up a 1TB hard drive."
Thing is, wouldn't the apps themselves have to be made to take advantage of dual cores as well?
JBunch1228: the short-term answer is nothing. Same answer as for the average Joe asking what he needs a quad-core in his desktop for. Right now it seems as much a sales gimmick as anything else, since the only Android version that can actually make use of it is HC. Kinda like the 4G bandwagon everyone jumped on; all marketing right now.
Personally, I'd like to see what happens with the paradigm the Atrix is bringing out in a year or so: put Linux on a decent-sized SSD for the laptop component, and use the handset for processing and communications exclusively, rather than treating the 'laptop dock' as nothing more than an external keyboard.
As far as battery life goes, I can see how dual cores could affect it positively: a dual core doesn't pull as much power as two individual cores, and if the chip runs for half as long as a single core would for the same operation, that would give you better battery life. Everyone keep in mind I said *if*. I don't see that happening before Q4, since the OS and apps need to be optimized for it.
My $.02 before depreciation.
Then there are the rumors of mobile quad cores from Nvidia by Q4 as well. I'll keep my single-core Vision and see what's out there when my contract ends. We may have a whole new world.
KCRic said:
For one thing (my main interest anyway), dual-core CPUs and beyond give us better battery life. If we end up with more data-intensive apps and Android becomes more powerful, multi-core CPUs will help a lot as well. Naturally, Android will need to be broken down and revamped to utilize multiple cores to their full potential, though. At some point I can see Google using or merging a large part of the desktop Linux kernel to help with that process.
Wow, that's complete nonsense.
You can't add parts and end up using less power.
Also, Android needs no additional work to support multiple cores. Android runs on the LINUX KERNEL, which is ***THE*** choice for multi-core/multi-processor supercomputers. Android applications each run in their own process, and the Linux kernel handles process scheduling. Android applications are also *already* multi-threaded (unless the specific application developer was a total newb).
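The claim that apps are already multi-threaded inside a single kernel-scheduled process can be illustrated with a plain Python stand-in (an illustration only, not actual Android/Dalvik code): four threads share one process ID, and the kernel decides where and when they run.

```python
import os
import threading

results = []

def worker(name):
    # Each thread records its name and the process ID it shares.
    results.append((name, os.getpid()))

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

distinct_pids = {pid for _, pid in results}  # one process, many threads
```

All four workers report the same PID: threading happens inside the process, and scheduling those threads across cores is the kernel's job, not the application's.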
At the rate Android (and smart phones in general) is progressing, someday we may see a 64bit OS on a phone, we will definitely need multi-core cpu's then. I know, it's a bit of a dream but it's probably not too elaborate.
What's the connection? Just because the desktop processor manufacturers went multi-core and 64bit at roughly the same time doesn't mean that the two are even *slightly* related. Use of a 64bit OS on a phone certainly does ***NOT*** somehow require that the processor be multi-core.
dhkr234 said:
Wow, that's complete nonsense.
You can't add parts and end up using less power.
Also, Android needs no additional work to support multiple cores. Android runs on the LINUX KERNEL, which is ***THE*** choice for multi-core/multi-processor supercomputers. Android applications each run in their own process, and the Linux kernel handles process scheduling. Android applications are also *already* multi-threaded (unless the specific application developer was a total newb).
What's the connection? Just because the desktop processor manufacturers went multi-core and 64bit at roughly the same time doesn't mean that the two are even *slightly* related. Use of a 64bit OS on a phone certainly does ***NOT*** somehow require that the processor be multi-core.
The connection lies in the fact that this is technology we're talking about. It continually advances and does so at a rapid rate. Nowhere did I say we'll make that jump 'at the same time'. Linux is not ***THE*** choice for multi-core computers; I use Sabayon, but Win7 also seems to do just fine with multiple cores. Android doesn't utilize multi-core processors to their full potential and uses a modified version of the Linux kernel (which does fully support multi-core systems); that's why I made the statement about merging. Being Linux and being based on Linux are not the same thing. Think of iOS or OSX - based on Linux, but tell me, how often do Linux instructions work for a Mac?
"You can't add parts and use less power": the car industry would like you to clarify that, along with the computer industry. Ten years ago, how much energy did electronics use? Was the speed-and-power vs. power-consumption ratio better than it is today? No? I'll try to give an example that hopefully explains why it consumes less power.
Pizza=data
People=processors
Time=heat and power consumption
1 person takes 20 minutes to eat 1 whole pizza, while 4 people take only 5 minutes. That one person is going to have to work harder and longer to complete the same task as the 4 people. That will use more energy and generate much more heat. Heat, as we know, causes processors to become less efficient, which means more energy is wasted at higher clock cycles and less information is processed per cycle.
It's not a very technical explanation of why a true multi-core system uses less power, but it will have to do. Maybe ask Nvidia too, since they stated the Tegra processors are more power efficient.
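One way to make the pizza analogy concrete is the frequency/voltage-scaling argument often given for "more cores at lower clocks": dynamic CMOS power grows roughly with f x V^2, and supply voltage tends to scale with frequency, so power rises roughly with the cube of clock speed. A sketch under that idealized model (constants invented for illustration; static leakage and coordination overhead are ignored, which is exactly what the counter-argument attacks):

```python
def dynamic_power(freq):
    """Toy CMOS model: P ~ f * V^2 with V scaling with f, i.e. P ~ f**3.
    Units and constants are arbitrary and purely illustrative."""
    return freq ** 3

WORK = 100.0  # abstract units of computation to finish

# One core at full clock (f = 1.0).
single_time = WORK / 1.0
single_energy = dynamic_power(1.0) * single_time

# Two cores at half clock: same total throughput (2 x 0.5 = 1.0).
dual_time = WORK / (2 * 0.5)
dual_energy = 2 * dynamic_power(0.5) * dual_time
```

In this idealized model the dual-core setup finishes the same work on a quarter of the energy; whether real phone silicon behaves anything like this is exactly what the rest of the thread disputes.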
KCRic said:
The connection lies in the fact that this is technology we're talking about. It continually advances and does so at a rapid rate. Nowhere did I say we'll make that jump 'at the same time'. Linux is not ***THE*** choice for multi-core computers; I use Sabayon, but Win7 also seems to do just fine with multiple cores.
Show me ***ONE*** supercomputer that runs wondoze. I DARE YOU! They don't exist!
Android doesn't utilize multi-core processors to their full potential and uses a modified version of the Linux kernel (which does fully support multi-core systems); that's why I made the statement about merging. Being Linux and being based on Linux are not the same thing.
??? No, being LINUX and GNU/LINUX are not the same. ANDROID ***IS*** LINUX, but not GNU/LINUX. The kernel is the kernel. The modifications? Have nothing to do with ANYTHING this thread touches on. The kernel is FAR too complex for Android to have caused any drastic changes.
Think of iOS or OSX - based on Linux, but tell me, how often do Linux instructions work for a Mac?
No. Fruitcakes does NOT use LINUX ***AT ALL***. They use MACH. A *TOTALLY DIFFERENT* kernel.
"You can't add parts and use less power": the car industry would like you to clarify that, along with the computer industry. Ten years ago, how much energy did electronics use? Was the speed-and-power vs. power-consumption ratio better than it is today? No? I'll try to give an example that hopefully explains why it consumes less power.
Those changes are NOT RELATED to adding cores, but making transistors SMALLER.
Pizza=data
People=processors
Time=heat and power consumption
1 person takes 20 minutes to eat 1 whole pizza, while 4 people take only 5 minutes. That one person is going to have to work harder and longer to complete the same task as the 4 people. That will use more energy and generate much more heat. Heat, as we know, causes processors to become less efficient, which means more energy is wasted at higher clock cycles and less information is processed per cycle.
It's not a very technical explanation of why a true multi-core system uses less power, but it will have to do. Maybe ask Nvidia too, since they stated the Tegra processors are more power efficient.
You have come up with a whole lot of nonsense that has ABSOLUTELY NO relation to multiple cores.
Energy consumption is related to CPU TIME.
You take a program that takes 10 minutes of CPU time to execute on a single-core 3GHz processor, split it between TWO otherwise identical cores operating at the SAME FREQUENCY, add in some overhead for splitting it between two cores, and you have 6 minutes of wall time on TWO cores, i.e. 12 core-minutes of CPU time, which is 20% *MORE* energy consumed on the dual-core processor.
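That arithmetic, using core-minutes at a fixed per-core power draw as the energy proxy:

```python
PER_CORE_POWER = 1.0  # arbitrary units; the cores are otherwise identical

single_energy = PER_CORE_POWER * (10 * 1)  # 10 min of CPU time on one core
dual_energy = PER_CORE_POWER * (6 * 2)     # 6 min wall time across two cores

extra_fraction = dual_energy / single_energy - 1  # 0.2, i.e. 20% more
```

The dual core finishes sooner (6 minutes of wall time instead of 10), but at the cost of more total core-minutes, which is the post's point: faster, not more energy-efficient.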
And you want to know what NVIDIA will say about their bloatchips? It uses less power than *THEIR* older hardware because it has **SMALLER TRANSISTORS** that require less energy.
Don't quit your day job; computer engineering is NOT YOUR FORTE.
dhkr234 said:
Show me ***ONE*** supercomputer that runs wondoze. I DARE YOU! They don't exist!
??? No, being LINUX and GNU/LINUX are not the same. ANDROID ***IS*** LINUX, but not GNU/LINUX. The kernel is the kernel. The modifications? Have nothing to do with ANYTHING this thread touches on. The kernel is FAR too complex for Android to have caused any drastic changes.
No. Fruitcakes does NOT use LINUX ***AT ALL***. They use MACH. A *TOTALLY DIFFERENT* kernel.
Those changes are NOT RELATED to adding cores, but making transistors SMALLER.
You have come up with a whole lot of nonsense that has ABSOLUTELY NO relation to multiple cores.
Energy consumption is related to CPU TIME.
You take a program that takes 10 minutes of CPU time to execute on a single-core 3GHz processor, split it between TWO otherwise identical cores operating at the SAME FREQUENCY, add in some overhead for splitting it between two cores, and you have 6 minutes of wall time on TWO cores, i.e. 12 core-minutes of CPU time, which is 20% *MORE* energy consumed on the dual-core processor.
And you want to know what NVIDIA will say about their bloatchips? It uses less power than *THEIR* older hardware because it has **SMALLER TRANSISTORS** that require less energy.
Don't quit your day job; computer engineering is NOT YOUR FORTE.
If you think that it's just a gimmick or trend, then why does every laptop manufacturer use dual core or more and get better battery life than the old single cores? Sometimes trends do have more use than aesthetic appeal. Your know-it-all approach is nothing new around here, and you're not the only person here who works in IT. Theories are one thing, but without any proof, when ALL current tech says otherwise... it makes you sound like an idiot. Sorry...
I bet I can pee further
Sent from my HTC Vision using XDA App
zaelia said:
I bet I can pee further
Sent from my HTC Vision using XDA App
The smaller ones usually can; I think it has to do with the urethra being narrower, allowing a tighter, further-shooting stream.
Sent from my HTC Glacier using XDA App
TJBunch1228 said:
The smaller ones usually can; I think it has to do with the urethra being narrower, allowing a tighter, further-shooting stream.
Sent from my HTC Glacier using XDA App
Well, you would know
sino8r said:
Well, you would know
It might be short but it sure is skinny.
Sent from my HTC Glacier using XDA App
sino8r said:
If you think that it's just a gimmick or trend, then why does every laptop manufacturer use dual core or more and get better battery life than the old single cores? Sometimes trends do have more use than aesthetic appeal. Your know-it-all approach is nothing new around here, and you're not the only person here who works in IT. Theories are one thing, but without any proof, when ALL current tech says otherwise... it makes you sound like an idiot. Sorry...
+1
I was comparing speeds on the Atrix to the [email protected] and they matched. The Atrix was much more efficient on heat and probably on battery. Dual cores will use less power because the two cores will be better optimized for splitting tasks, using half the power to run the same process that a single core would run at full voltage. Let's not start a flame war or make personal attacks on people.
Sent from my HTC Vision with Habanero FAST 1.1.0
It is disturbing that there are people out there who can't understand this VERY BASIC engineering.
Voltage, by itself, has NO MEANING. You are forgetting about CURRENT. POWER = CURRENT x VOLTAGE.
Battery drain is DIRECTLY PROPORTIONAL to POWER. Not voltage. Double the voltage and half the current, power remains the same.
Dual core does NOT increase battery life. It increases PERFORMANCE by ***DOUBLING*** the physical processing units.
Battery life is increased through MINIATURIZATION and SIMPLIFICATION, which becomes *EXTREMELY* important as you increase the number of physical processing units.
It is the epitome of IGNORANCE to assume that there is some relation when there is not. The use of multiple cores relates to hard physical limitations of the silicon. You can't run the silicon at 18 GHz! Instead of racing for higher frequencies, the new competition is about how much work you can do with the SAME frequency, and the ***EASIEST*** way to do this is to bolt on more cores!
For argument's sake, take a look at a couple of processors:
Athlon II X2 240e / C3.... 45 watt TDP, 45 nm
Athlon II X4 630 / C3.... 95 watt TDP, 45 nm
Same stepping, same frequency (2.8 GHz), same voltage, same size, and the one with twice the cores eats more than twice the power. Wow, imagine that!
The X4 is, of course, FASTER, but not by double.
Now let's look at another pair of processors:
Athlon 64 X2 3800+ / E6.... 89 watt TDP, 90 nm
Athlon II X2 270u / C3.... 25 watt TDP, 45 nm
Different stepping, SAME frequency (2.0 GHz), same number of cores, different voltage, different SIZE, WAY different power consumption. JUST LOOK how much more power the older chip eats!!! 3.56 times as much. Also note that other power management features exist on the C3 that didn't exist on the E6, so the difference in MINIMUM power consumption is much greater.
Conclusion: There is no correlation between a reduction in power consumption and an increase in the number of PPUs. More PPUs = more performance. Reduction in power consumption is related to size, voltage, and other characteristics.
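For anyone who wants to sanity-check the arithmetic in that post, here is a minimal sketch (the TDP figures are the ones quoted above; treating TDP as a stand-in for actual power draw is a simplification for illustration):

```python
# Battery drain tracks POWER, not voltage: P = V * I.
def power_watts(volts, amps):
    return volts * amps

# Double the voltage, halve the current: power is unchanged.
assert power_watts(1.0, 2.0) == power_watts(2.0, 1.0)

# TDP examples from the post (same C3 stepping, 45 nm, 2.8 GHz):
tdp_watts = {"Athlon II X2 240e": 45, "Athlon II X4 630": 95}
core_count = {"Athlon II X2 240e": 2, "Athlon II X4 630": 4}
for chip, tdp in tdp_watts.items():
    # Per-core budget is nearly identical; doubling cores roughly doubles power.
    print(f"{chip}: {tdp / core_count[chip]:.2f} W per core")
```

The per-core figures come out within about a watt of each other, which is the point: more cores buy performance, not battery life.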
dhkr234 said:
Don't quit your day job; computer engineering is NOT YOUR FORTE.
Click to expand...
Click to collapse
Good job on being a douche. I didn't insult you in anything I said, and if you disagree with my perspective then state it; otherwise, shut up. I didn't tell you English grammar isn't your forte, so maybe you should keep your snide remarks to yourself.
You seem to want to argue over a few technicalities and I'll admit, I don't have a PhD in computer engineering but then again I doubt you do either. For the average person to begin to understand the inner-workings of a computer requires you to set aside the technical details and generalize everything. When they read about a Mac, they will see the word Unix which also happens to appear in things written about Linux and would inevitably make a connection about both being based off of the same thing (which they are). In that sense, I'm correct - you're wrong. The average person doesn't differentiate between 'is' and 'based off', most people take them in the same context.
So I may be wrong about some things when you get technical, but when you're talking to the average person who thinks a higher CPU core clock = a better processor, you end up being wrong because they won't give a damn about the FSB or anything else. Also, when you start flaming people and jumping on them over insignificant things you come off as a complete douche. If I'm wrong about something then tactfully and politely correct me - don't act like an excerebrose know-it-all. Let's not even mention completely going off track about Windoze; servers aren't the only things that have multi-core processors.
I'm sure you'll try to multi-quote me with a slew of unintelligent looking, lame comebacks and corrections but in the end you'll just prove my point about the type of person you are. ****The End****
KCRic said:
Good job on being a douche. I didn't insult you in anything I said, and if you disagree with my perspective then state it; otherwise, shut up. I didn't tell you English grammar isn't your forte, so maybe you should keep your snide remarks to yourself.
Click to expand...
Click to collapse
Agreeing or disagreeing is pointless when discussing FACTS. Perspective has nothing to do with FACTS. You can think whatever you like, but it doesn't make you right.
You seem to want to argue over a few technicalities and I'll admit, I don't have a PhD in computer engineering but then again I doubt you do either.
Click to expand...
Click to collapse
Common mistake, assuming that everybody is the same as you. Try not to make that assumption again.
For the average person to begin to understand the inner-workings of a computer requires you to set aside the technical details and generalize everything.
Click to expand...
Click to collapse
Generalizations lead to inaccuracies. You do not teach by generalizing, you teach by starting from the bottom and building a foundation of knowledge. Rene Descartes (aka Renatus Cartesius, as in Cartesian geometric system, as in the father of analytical geometry) said that the foundation of all knowledge is that doubting one's own existence is itself proof that there is someone to doubt it -- "Cogito Ergo Sum" -- "I think therefore I am". Everything must begin with this.
When they read about a Mac, they will see the word Unix which also happens to appear in things written about Linux and would inevitably make a connection about both being based off of the same thing (which they are). In that sense, I'm correct - you're wrong. The average person doesn't differentiate between 'is' and 'based off', most people take them in the same context.
Click to expand...
Click to collapse
... and need to be CORRECTED for it. The two kernels (the only components relevant to this discussion) are completely different! MACH is a MICRO kernel, Linux is a MONOLITHIC kernel. Superficial characteristics (which are OUTSIDE of the kernel) be damned, they are NOT the same thing and thinking that they are is invalid. The average person is irrelevant, FACTS are FACTS.
So I may be wrong about some things when you get technical, but when you're talking to the average person who thinks a higher CPU core clock = a better processor, you end up being wrong because they won't give a damn about the FSB or anything else.
Click to expand...
Click to collapse
So are you trying to tell me that IGNORANCE is BLISS? Because "giving a damn" or not has NO BEARING on reality. The sky is blue. You think that its purple and don't give a damn, does that make it purple? No, it does not.
Also, when you start flaming people and jumping on them over insignificant things you come off as a complete douche. If I'm wrong about something then tactfully and politely correct me - don't act like an excerebrose know-it-all. Let's not even mention completely going off track about Windoze; servers aren't the only things that have multi-core processors.
Click to expand...
Click to collapse
Right, servers AREN'T the only thing running multi-core processors, but did you not read where I SPECIFICALLY said **SERVERS**? Windoze is off track and UNRELATED. I brought up servers because THEY USE THE SAME KERNEL AS ANDROID. If a supercomputer uses Linux, do you not agree that Linux is CLEARLY capable of multiprocessing well enough to meet the needs of a simple phone?
I'm sure you'll try to multi-quote me with a slew of unintelligent looking, lame comebacks and corrections but in the end you'll just prove my point about the type of person you are. ****The End****
Click to expand...
Click to collapse
... perfectionist, intelligent, PATIENT in dealing with ignorance. And understand that ignorance is not an insult when it is true, and contrary to common "belief", does NOT mean stupid. Learn the facts and you will cease to be ignorant of them.
So hopefully this train can be put back on the tracks...
From what I am understanding from more technically minded individuals, dual core should help with battery life because it requires less power to run the same things as a single core. It can then probably be extrapolated that, when pushed, dual core will be able to go well above and beyond its single-core brethren in terms of processing power.
For now, it appears the only obvious benefit will be increased battery life and less drain on the processor due to overworking. Hopefully in the near future more CPU and GPU intensive processes are introduced to the market which will fully utilize the Dual Core's potential in the smartphone world. Thanks for all the insight.
dhkr234 - *slaps air high-five*
As pretty much everyone here is aware, there seems to be an obsession with using O3 for compiling binaries for this device. This obsession is probably due to the fact that O3 is the "most optimized" flag in GCC. The issue here is that all of these optimizations do not come without drawbacks.
Technically, due to the nature of the Galaxy Nexus as a mid-spec ARM-based device, we should be using Os to reduce the size of the code that needs to be run.
Also, there are many other drawbacks to O3, such as significantly larger binary size and possible instability, which is why it is not default in the Linux kernel. Binary size does not only impact the size on disk, but can also impact processing time of the code and the amount of space that the program takes in the CPU cache and RAM.
If somebody could please show actual benchmark data showing that O3 optimization actually is an improvement compared to O2 and Os on the Galaxy Nexus, I would really appreciate seeing why it is used on nearly every ROM and kernel.
Edit: I also just read up on Ofast, which disables some standard compliance by simplifying math. I wonder if this would cause any stability issues on the Galaxy Nexus. I'd really like to try -Os -ffast-math when I have time.
I'm sorry, I don't have the time to do that, but I can say this. In all the time I've spent tinkering with compiler optimisations, -O3 has rarely been worth it. Especially on a system like the OMAP 4460 which suffers more from IO bottlenecking than MIPS or FLOPS being the bottleneck. It seems Google's default is -O2 and they have guys who know things about compilers. I would be very curious about -Os though, since that's basically -O2 with the code-bloating features turned off. But I suspect there won't be a perceptible difference.
My guess (again, I don't have time to test this) is that since 3 is a bigger number than 2, it's used by people who don't know precisely what it does, which seems to be the MO for a lot of people who create ROMs.
borizz27 said:
My guess (again, I don't have time to test this) is that since 3 is a bigger number than 2, it's used by people who don't know precisely what it does, which seems to be the MO for a lot of people who create ROMs.
Click to expand...
Click to collapse
I also believe that this is the case... I've even seen a ROM on another phone "compiled with O4", which just uses O3 (anything >3 just sets the optimization level to 3)...
During my brief stint writing patches for Gentoo Linux, back when my desktop computer was slower than my phone is now, I read all kinds of weird stuff. People with -ffast-math enabled complaining that the math was wrong, for example, or people on tiny systems calling for complete loop unrolling.
The GCC website is quite clear in what the different -O levels do: http://gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Optimize-Options.html#Optimize-Options
I would find it very odd if someone at Google hadn't had the same idea and actually tested the different -O levels. I'm guessing -O2 is where it's at.
borizz27 said:
During my brief stint writing patches for Gentoo Linux, back when my desktop computer was slower than my phone is now, I read all kinds of weird stuff. People with -ffast-math enabled complaining that the math was wrong, for example, or people on tiny systems calling for complete loop unrolling.
The GCC website is quite clear in what the different -O levels do: http://gcc.gnu.org/onlinedocs/gcc-4.4.4/gcc/Optimize-Options.html#Optimize-Options
I would find it very odd if someone at Google hadn't had the same idea and actually tested the different -O levels. I'm guessing -O2 is where it's at.
Click to expand...
Click to collapse
Sure, and somebody who worked on Debian, Red Hat, etc. also decided on O2, and it has just become the standard for stable, production-ready C builds.
Theoretically, due to the lack of sufficient cache on tiny ARM chips like the omap4, we should try to keep minimal code size through something like Os.
Also, I am assuming that -ffast-math has improved because of the inclusion of Ofast in GCC 4.6.
I hope to do some testing after I sync AOSP and fix errors with GCC 4.8
MДЯCЦSДИT said:
Sure, and somebody who worked on Debian, Red Hat, etc. also decided on O2, and it has just become the standard for stable, production-ready C builds.
Theoretically, due to the lack of sufficient cache on tiny ARM chips like the omap4, we should try to keep minimal code size through something like Os.
Also, I am assuming that -ffast-math has improved because of the inclusion of Ofast in GCC 4.6.
Click to expand...
Click to collapse
I don't know. The 4460's cache is 1MB. While not huge, it's not tiny by any stretch of the imagination. However, I'm looking forward to your results. We can all keep guessing about what's best, but hard data will be better.
As far as I know, -ffast-math wasn't improved -- it still cuts the same corners and breaks the standard in the same places. -Ofast just combines -O3 with all the standards-breaking options like -ffast-math.
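Since nobody has posted numbers yet, here is a rough sketch of how one could at least measure the binary-size side of the argument (assumes gcc is on your PATH and falls back to doing nothing otherwise; speed would still need a real benchmark, so treat size as one data point, not a verdict):

```python
# Compile the same toy C program at several -O levels and compare binary sizes.
import os
import shutil
import subprocess
import tempfile
import textwrap

def binary_sizes(levels=("-O2", "-Os", "-O3")):
    if shutil.which("gcc") is None:
        return {}  # no compiler available; nothing to measure
    src = textwrap.dedent("""\
        #include <stdio.h>
        int main(void) {
            long s = 0;
            for (int i = 0; i < 1000; i++) s += i * i;  /* something to optimize */
            printf("%ld\\n", s);
            return 0;
        }
        """)
    sizes = {}
    with tempfile.TemporaryDirectory() as d:
        c_file = os.path.join(d, "t.c")
        with open(c_file, "w") as f:
            f.write(src)
        for level in levels:
            out = os.path.join(d, "t" + level)
            subprocess.run(["gcc", level, c_file, "-o", out], check=True)
            sizes[level] = os.path.getsize(out)  # bytes on disk
    return sizes

print(binary_sizes())
```

On a real ROM build the interesting comparison would be the compiled system binaries themselves, but the same idea applies.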
A few users in the kernel asked about this subject. So I'm going to answer some questions, and provide some information and understanding here about what a "cpu bin" means (/d/acpuclk/pvs_bin) and more importantly, what it means to us as users.
I'll go ahead and snip my response from the kernel thread to get things rolling here, so we can get a basic idea of it and have an analogy that puts it in physical terms - comparing it to something less "mysterious" for the everyday guy wanting to understand.
Your CPU bin is the result of your device's inspection and test criteria at the Qualcomm factory. Basically, a high-bin CPU like a 5 or 6 is a very well made chip, very stable, with very few imperfections from the manufacturing process. What this means for a HIGH bin is that the chip requires less voltage to operate at any frequency than, say, a bin 1. This is why you see some people having reboot issues when trying to undervolt - their processor becomes unstable with less juice because of less accurate tolerances.
Think of it as friction. If you have a well-oiled arm on a machine, and part of that arm's job is to force its way through an opening repeatedly, and the tolerances on the arm and the opening are just slightly off... well, for that "more out of tolerance" machine to do the same amount of work as one whose parts were machined perfectly, it would require more force, because there is inherently more friction from a less accurate build process. Think of the machine as the CPU, the force required to move the arm as the voltage required by your chip, and the tolerances of the parts as the same - just different types of parts because, of course, it is an analogy.
A higher CPU bin is, generally speaking, a more stable chip. Bus frequency, RAM speed, GPU speeds... Everything is directly related, in terms of stability and capable clock rate, to the chip's bin (or quality of build).
Here is an interesting article that most people will find shocking. Look at the difference in clock rates of low- and high-binned chips:
http://www.androidbeat.com/2013/09/difference-snapdragon-800-2-2ghz-2-3ghz/#.Uwdf2p_TnqA
Note the example of the HTC One and Galaxy S4. It is obvious that Qualcomm sold their higher-end chips to Samsung, while HTC was given the lower-quality chips. Same chip. Same theoretical clock rate. However, the chips in the HTC One differ in one aspect - their build quality, and therefore their capable clock rate and their stable clock rate.
Click to expand...
Click to collapse
And of course the end of that article reallllyy sums up the bottom line here for those of us who like to overclock:
So, you mean to say I should avoid any Android device that uses a Snapdragon S800 SoC running at 2.2GHz, and not 2.3GHz? No! The S800 is the fastest SoC available from Qualcomm, and the slight difference in performance between the different bins should not affect your final decision at all. The S800 is more than future-proof so don’t worry about the slight difference in clock speed.
However, if you are a benchmark junkie or love to overclock your device, better get an Android device that uses the higher binned S800.
Click to expand...
Click to collapse
It is important to note that while there is a slight difference in performance (at stock speeds, of course), there is a huge difference in stability when you start adding variables that were not accounted for when the processor was given its bin number... overclocking... and undervolting - both common changes made to a device's operation after rooting and installing a new kernel.
A small tidbit of information to think about:
A bin 6 runs the stock MAX frequency with only 950 mV...
A bin 0 runs the stock MAX frequency with 1100 mV...
A 150 mV difference! You can see the point of my "machine analogy", can't you? Less is required to do the same amount of work.
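To put a rough number on that 150 mV gap: dynamic CPU power scales on the order of voltage squared at a fixed frequency (a first-order CMOS rule of thumb, not something taken from the kernel source), so the voltage difference alone is significant:

```python
# First-order CMOS approximation: dynamic power ~ C * V^2 * f.
# At the same max frequency and capacitance, the ratio reduces to (V0/V6)^2.
v_bin6 = 0.950  # volts at stock max frequency, bin 6 (figure from above)
v_bin0 = 1.100  # volts at stock max frequency, bin 0

ratio = (v_bin0 / v_bin6) ** 2
print(f"bin 0 burns roughly {(ratio - 1) * 100:.0f}% more dynamic power at max clock")
```

Leakage and other static draw are ignored here, so the real-world gap will differ, but it shows why the same kernel settings hit battery differently from one bin to the next.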
SO, what does all of this mean anyways? Well, to sum it all up, it simply means that you should be aware of your device's capabilities. KNOW YOUR BIN!
With a file explorer, navigate to:
Code:
/d/acpuclk/pvs_bin
And if you are running 4.4.2 KitKat:
Code:
/sys/devices/system/soc/soc0/soc_pvs
There will be a number there 0 - 6
If you are an overclock junkie, the higher the number, the better.
A lower number like a 0 or 1 simply means that you will not be able to get away with as much overclocking and undervolting. You kind of just are what you are. If you are a 0 or 1, or even a 2, and you are overclocking and undervolting your device and having reboots... well, luck of the draw. Your chip just needs that extra juice to operate; it is a physically binding attribute. Set, and test. Set, and test. Find out where your device is comfortable and what it can handle, and accept it.
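If you prefer a terminal or a script over a file explorer, the same check can be done programmatically (the two paths are exactly the ones above; on anything that is not one of these devices this simply returns None):

```python
# Read the PVS bin from whichever path this build exposes.
def read_pvs_bin():
    paths = (
        "/d/acpuclk/pvs_bin",                    # pre-KitKat
        "/sys/devices/system/soc/soc0/soc_pvs",  # 4.4.2 KitKat
    )
    for path in paths:
        try:
            with open(path) as f:
                return int(f.read().strip())
        except (OSError, ValueError):
            continue
    return None  # not on a supported device

print(read_pvs_bin())  # 0-6 on the device, None elsewhere
```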
There is a lot more information that I will add later - specifically about how the different bins are more or less power friendly.
I hope this sheds some light for those who want to understand this.
For those of you wanting to know the guts (as a result of, again, PMs) Keep reading...
BREAKING IT DOWN - A Tale of Two Snapdragons
The test methods involved in "binning" chips. What I am about to explain is something that, quite honestly, few people know, because the testing process goes on behind closed doors at Qualcomm, but it is common practice in manufacturing anything mechanical or electronic. Quality Control is why you have these "binned" CPUs. The bin is basically the result of a set of tests run on the chip to examine extremes in the variation of its fabrication. No chip, or manufactured part for that matter, is exactly the same as the next, simply because manufacturing variables cause the design to come out with slight variations or even defects. The silicon of the chip being exposed to undesirable or slightly out-of-tolerance environmental temperatures, for example, could have an effect on the quality of the end product. It is a very controlled process, and the "process corner", as it is called, or design of experiments, is a process used to test, evaluate, and graph the uncontrolled moments of that particular part's manufacturing journey.
All of this translates to the robustness of a design. After Qualcomm builds the processor, they want to know how the device will perform under different extreme conditions! Simple logic! If I build something, I want to know how it will handle stress, right? But I don't want to damage the ones I have already built. What they do is replicate these possible manufacturing defects in something called "corner lots". Corner lots have had their manufacturing process parameters adjusted to these extremes. They will then test the devices made from these special "test wafers". Typically for CPUs, and I know this is true at Qualcomm, they will test voltage, clock frequency, and temperature.
For voltage, for example, the idea is to push the device to its maximum and minimum capability at various clock frequencies to determine its stability threshold. Any of you other Engineers out there of the electrical type (I am Mechanical, however) will have heard of a "shmoo plot", which is basically these test results graphed as hard data. Based on how the chip performs, it is given a number. A well made chip has less manufacturing variation, obviously, passes the tests with flying colors, shows very desirable characterization traits during the test method, and is given a bin 6 - just as an example. Another chip does OK, is a little less stable overall than the previous one, but is still acceptable based upon the design and engineering criteria, and is given a 0 - barely passing the characterization "test".
So back to the beginning. What is CPU binning? What does the number mean?
Well, based upon the PVS tables in the source code, it is obvious that the bin 6's are the ones with less VFP (variation of fabrication parameters), because they require less voltage and at the same time are clocked higher in the GPU, CPU, RAM, and bus. Less force is required to get the job done. A bin 6 would be comparable to the car you bought that finally took a dump at 300k miles while another of similar make, model, and year died at 200k. Variation in manufacturing. It applies to everything in industry, not just cars and machined parts made of steel. That is what the tests are designed to capture. That is why your voltage tables will vary slightly from one device to the next. That is why one person can run this kernel and another can't, and why one person can undervolt their device 35mV while you cannot.
That is what your CPU "bin" represents, people. Simply the physical results of some tests done by some engineers to determine your particular processor's compliance with tolerances as it was being built.
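For the curious, the "shmoo plot" mentioned above is easy to picture with a toy model (the pass/fail rule here is invented purely for illustration - it is not Qualcomm's actual test criteria):

```python
# Toy shmoo plot: '+' marks a stable voltage/frequency combination, '-' a failure.
# Invented rule for illustration: a chip is stable when the supplied voltage
# meets a per-chip minimum that rises linearly with frequency.
def shmoo(vmin_base, slope, freqs_ghz, volts):
    rows = []
    for v in reversed(volts):  # highest voltage on top, like a real shmoo plot
        cells = "".join("+" if v >= vmin_base + slope * f else "-" for f in freqs_ghz)
        rows.append(f"{v:.2f}V {cells}")
    return "\n".join(rows)

freqs = [0.3, 0.6, 0.9, 1.2, 1.5, 1.9, 2.3]  # GHz steps
volts = [0.80, 0.90, 1.00, 1.10]
print("high-bin chip (low Vmin):")
print(shmoo(0.70, 0.10, freqs, volts))
print("low-bin chip (needs more juice):")
print(shmoo(0.80, 0.12, freqs, volts))
```

The high-bin chip passes over a wider region of the grid, which is exactly the headroom overclockers and undervolters are exploiting.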
Reserved for images of pvs tables. Note the difference in the voltage tables to the right of the frequency steps.
Tmobile note 3, my number is 3. Thanks for the lesson and helpful info.
I wish you had the scale backwards - my T-Mobile Note 3 is a 1.
Edit: Checked my wife's phone and she has a 0. These phones are less than a month old. Wonder if it is just hit and miss per batch or if they started buying cheaper chips.
You are a beacon of knowledge.. Another great write up.. tappin that thanks
Sent from my SPH-L720 using Tapatalk
hmmm interesting write up. I had no clue about CPU binning. Mine is a value of 5. Is there supposed to be anything else in there? It's just a 5, nothing else.
Got a 3.
I don't do any CPU tweaks, but it's nice to know for future reference.
Thx
Sent from my SM-N900T using xda premium
rjohnstone said:
Got a 3.
I don't do any CPU tweaks, but it's nice to know for future reference.
Thx
Sent from my SM-N900T using xda premium
Click to expand...
Click to collapse
Yeah, this phone doesn't need overclocking. I'm stock deodexed with bloat removed and I've never once had the phone stutter or lag, and apps open consistently faster than any phone I've had prior to this one (including my S4). Though I do maintenance (wipe Dalvik/cache) every 3 or 4 days.
cun7 said:
With a file explorer, navigate to:
Code:
/d/acpuclk/pvs_bin
There will be a number there 0 - 6
If you are an overclock junkie, the higher the number, the better.
.
Click to expand...
Click to collapse
I am on the Project X KitKat ROM - I used Root Explorer to see if I could locate this - I found the d/ folder but I could not find anything named anything close to acpuclk/pvs_bin.
Maybe I am looking in the wrong place? Any guidance would be helpful...
thanks
Eric214 said:
Mine is a value of 5. Is there supposed to be anything else in there? It's just a 5, nothing else.
Click to expand...
Click to collapse
I hate you.
Sent from another galaxy
Also have a 3 on my Samsung Galaxy Note 3 (T-Mobile - Stock 4.3)
3 here
Sent from my SM-N900T using Tapatalk
Can anybody with a high bin (3-6, let's say) post the top and bottom values from acpu_table in the same directory? I just wanted to see how those settings differ at 300MHz idle and 2265MHz full speed from mine with bin 1.
This thread has been addressed before, and the problem with this is that if you ever flashed a ROM on your phone, then the bin number will change to that of the person who made the ROM. Therefore, the only way this works is if you never flashed a ROM or you have your backup ROM. Am I missing something?
---------- Post added at 12:11 AM ---------- Previous post was at 12:08 AM ----------
oh ok maybe i was wrong sorry
pete4k said:
Can anybody with a high bin (3-6, let's say) post the top and bottom values from acpu_table in the same directory? I just wanted to see how those settings differ at 300MHz idle and 2265MHz full speed from mine with bin 1.
Click to expand...
Click to collapse
I'm running a 3 and I have the same settings.
300 idle and 2265 max.
Sent from my SM-N900T using xda premium
Updated post number 2 with images of the PVS table source code. Look at the difference in voltage levels based on the bin number! Quite a difference, guys!
Mine doesn't have said folder lol
Sent from my SM-N9005 using Tapatalk
mocsab said:
I am on Project X Kit Kat rom - I used Root Explorer to see if I could locate this - I found d/ folder but I coulid not find anyhting named anything close to acpuclk/pvs_bin
maybe I am looking in the wrong place? Any guidance would be helpful...
thanks
Click to expand...
Click to collapse
It's not there in KitKat ROMs.
/d/acpuclk/
Not /acpuclk/
On Kitkat it is located at:
Code:
/sys/devices/system/soc/soc0/soc_pvs
The numbers I get in the CPU load string #SCPUAVG# seem to be...odd.
Right now I am getting 2.38 4.69 6.24
I get that these are the standard 1, 5 and 15 minute averages in Linux. What I don't get is how I am getting whacked-out numbers way over 1 per core (S4, so quad core) on the longer averages. I kind of doubt I am at over 150% averaged out over 15 minutes.
So what am I not getting?
brizey said:
The numbers I get in the CPU load string #SCPUAVG# seem to be...odd.
Right now I am getting 2.38 4.69 6.24
I get that these are the standard 1, 5 and 15 minute averages in Linux. What I don't get is how I am getting whacked-out numbers way over 1 per core (S4, so quad core) on the longer averages. I kind of doubt I am at over 150% averaged out over 15 minutes.
So what am I not getting?
Click to expand...
Click to collapse
If you want to understand what these values mean, this is a rather good article that explains how to interpret them in single- and multi-core environments: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
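Short version of that article: load averages are run-queue lengths, not percentages, so on a quad core you divide by the core count before reading them as utilization. A tiny sketch (assuming the `#SCPUAVG#` string is just the three space-separated values shown above):

```python
import os

# Normalize Linux load averages per core; values above 1.0 per core mean
# tasks were queued waiting for a CPU during that window.
def per_core_load(loadavg_str, ncores=None):
    ncores = ncores or os.cpu_count() or 1
    return [float(x) / ncores for x in loadavg_str.split()]

one, five, fifteen = per_core_load("2.38 4.69 6.24", ncores=4)
print(one, five, fifteen)  # the 15-min figure works out to 1.56 per core
```

So 6.24 on a quad core really does mean sustained over-100% demand, which on a phone usually points at a wakelock-happy app or a service stuck spinning.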
kwerdenker said:
If you want to understand what these values mean, this is a rather good article that explains how to interpret them in single- and multi-core environments: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages
Click to expand...
Click to collapse
Thanks. I had found that article. Still not sure how I could possibly be over 100% load averaged out over 15 minutes pretty much all the time, but maybe I am?
I'm building a Zooper template that is somewhat overly informative. I may just leave this out. The main point of the template is not the info, it's the integration, using Tasker to manage state, run scripts, etc.