Related
So I've noticed that the core OS and the native Microsoft apps all run buttery smooth because of the GPU-accelerated UI. Third-party apps, however, are much less consistent.
I've been trying to find out, but haven't managed to: do third-party apps also have access to GPU UI acceleration? If so, is it enabled by default, or does the developer need to enable this feature specifically?
I'm asking because I'm trying to work out why some apps are so laggy (WhatsApp and TVShows, to name two) while others seem almost as smooth as the native apps (e.g. NextGen Reader and Simple Paper). Is it down to a lack of UI acceleration, or do they just need more optimisation in the code?
Yep, I'd like to know this too. Any developers around?
I've done more digging, but since I'm not a developer I'm not sure I've understood the documents I've found correctly.
I believe certain UI elements in third-party apps are automatically GPU accelerated, but for others the developer has to enable it manually. If so, that would explain why some apps are smoother than others, and it points to developer optimisation being required to get good performance (unlike iOS, which seems to accelerate everything by default, or at least that's how it looks judging by app performance).
Any developers care to correct me or comment?
If so, what's the point of requiring developers to activate it? Wouldn't it be better if everything were accelerated by default?
I was under the impression that acceleration is enabled by default, though, as with any OS, some apps are just poorly written.
UI acceleration is not on by default; you have to enable it manually using a toolkit called Transitions, AFAIK.
scoobysnacks said:
I was under the impression that acceleration is enabled by default, though, as with any OS, some apps are just poorly written.
Yeah, most Silverlight elements should be. From what I've read, most of the core OS interface itself is compiled with a sort of hybrid SDK that runs a variant of XAML natively.
As for third-party apps, it's mostly down to the developers themselves. Before Mango it was a nightmare trying to get even smooth scrolling on list boxes without using a ton of memory, and it's only recently, in the 7.1 SDK, that they've added a performance analysis tool which I'm sure many developers still don't know exists (there are also a couple of built-in diagnostic switches - see the snippet at the end of this post).
What I've noticed, though, is that on my Omnia W heavier panorama apps like MetroTube or IMDb run much more smoothly than on my Trophy, thanks to the faster hardware.
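On the diagnostics side, if anyone wants to check this on their own device, Silverlight on WP7 has a few switches you can flip in App.xaml.cs. This is just a sketch from memory, so double-check the property names against the SDK docs:

Code:
// In App.xaml.cs, inside the App() constructor (the stock project template
// ships with similar lines, some of them commented out):
if (System.Diagnostics.Debugger.IsAttached)
{
    // Show UI-thread and compositor-thread frame rates on screen.
    Application.Current.Host.Settings.EnableFrameRateCounter = true;

    // Flash the regions that are being redrawn in software each frame.
    Application.Current.Host.Settings.EnableRedrawRegions = true;

    // Overlay a tint showing which surfaces are being handled by the GPU
    // cache - handy for spotting elements that miss bitmap caching.
    Application.Current.Host.Settings.EnableCacheVisualization = true;
}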
GPU acceleration is on for all applications by default.
But if an application runs slowly, it means the developer didn't write it according to the Windows Phone 7 guidelines and doesn't understand the application lifecycle.
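To give one concrete (and purely illustrative) example of what "understanding the lifecycle" means: save transient state when the app is deactivated or tombstoned, and restore it on the way back, instead of re-fetching everything on every navigation. The page and key names below are made up, but the State dictionary and the navigation overrides are the standard WP7 ones as far as I know:

Code:
using Microsoft.Phone.Controls;
using System.Windows.Navigation;

// Hypothetical page showing the usual pattern.
public partial class NewsPage : PhoneApplicationPage
{
    private string _searchText;

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);
        State["SearchText"] = _searchText;      // page State survives tombstoning
    }

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);
        object saved;
        if (State.TryGetValue("SearchText", out saved))
            _searchText = (string)saved;        // restore instead of recomputing
    }
}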
omohat said:
So I've noticed that the core OS and the native Microsoft apps all run buttery smooth because of the GPU-accelerated UI. Third-party apps, however, are much less consistent.
I've been trying to find out, but haven't managed to: do third-party apps also have access to GPU UI acceleration? If so, is it enabled by default, or does the developer need to enable this feature specifically?
I'm asking because I'm trying to work out why some apps are so laggy (WhatsApp and TVShows, to name two) while others seem almost as smooth as the native apps (e.g. NextGen Reader and Simple Paper). Is it down to a lack of UI acceleration, or do they just need more optimisation in the code?
Please have a look in the XNA forums for a how-to and downloads; it was very helpful, at least for me.
The base OS stock applications and OEM apps may use native code. That is why they run better. That's why HTC Hub is so damn smooth, while third-party apps that use a lot of graphics resources just aren't.
It's disingenuous to assume that it's the developers' fault. Didn't people do exactly the same thing when others were complaining about the list-scrolling issues?
Additionally, many of the stock apps are almost devoid of graphics: Messaging, Email, Calendar, and so on. There isn't much to render, so the phone has no trouble delivering great performance. I'm almost certain Microsoft used native code for Internet Explorer, and probably for the Zune functionality as well.
There's a huge difference in performance between something like Zune or Internet Explorer, even with a ton of graphical resources on screen (lots of album art, etc.), and a third-party application like Pulse, which lags like crazy.
N8ter said:
The base OS stock applications and OEM apps may use native code. That is why they run better. That's why HTC Hub is so damn smooth, while third-party apps that use a lot of graphics resources just aren't.
It's disingenuous to assume that it's the developers' fault. Didn't people do exactly the same thing when others were complaining about the list-scrolling issues?
Additionally, many of the stock apps are almost devoid of graphics: Messaging, Email, Calendar, and so on. There isn't much to render, so the phone has no trouble delivering great performance. I'm almost certain Microsoft used native code for Internet Explorer, and probably for the Zune functionality as well.
There's a huge difference in performance between something like Zune or Internet Explorer, even with a ton of graphical resources on screen (lots of album art, etc.), and a third-party application like Pulse, which lags like crazy.
Why is it only a very small number of apps, then, if it isn't poor coding?
Hardly any apps do it, which makes you assume it's down to the development.
A lot of apps perform poorly compared to the stock apps. Apps that don't use a lot of graphical resources are better off, since the hardware has hardly anything to render anyway.
Sent from my SGH-T959 using Tapatalk
N8ter said:
That's why HTC Hub is so damn smooth, while third-party apps that use a lot of graphics resources just aren't.
Please don't confuse people.
That's not the reason. HTC Hub uses native code only for accessing the registry through internal drivers.
N8ter said:
Additionally, many of the stock apps are almost devoid of graphics: Messaging, Email, Calendar, and so on. There isn't much to render, so the phone has no trouble delivering great performance. I'm almost certain Microsoft used native code for Internet Explorer, and probably for the Zune functionality as well.
There's a huge difference in performance between something like Zune or Internet Explorer, even with a ton of graphical resources on screen (lots of album art, etc.), and a third-party application like Pulse, which lags like crazy.
Every stock application, even Settings, is native and uses UIX (also called Silverlight for C++; I may be wrong about the name). They don't use managed code.
OK, let me explain how this works.
Hardware acceleration is TURNED OFF by default, except for a few elements.
If hardware acceleration is turned on for an element, what happens?
First: when a property with the special attribute (like Background) changes, the CPU renders the element to a bitmap.
Second: every time that region is invalidated, the element itself isn't redrawn on the screen; the cached bitmap is drawn instead.
Pros: the CPU isn't used for transformations or opacity changes, because the GPU handles those.
Cons: if the element changes frequently (in anything other than transformations and opacity), the CPU has to keep re-rendering the bitmap.
The CLR also has a huge impact on performance. Managed code is usually much slower than native code, even with JIT, and the managed wrappers and BCL classes add further overhead on top of that.
Sorry for my English; I hope you've understood me. You can try my application (linked in my signature) to see how I've optimized it. It runs very smoothly, but XAML parsing and the managed wrappers consume too much CPU time, which is why page transitions aren't as fast as the native ones.
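For anyone who wants to see what "turning it on for an element" looks like, here is a minimal sketch. The element and method names are just examples; BitmapCache itself is the standard Silverlight class, as far as I know:

Code:
using System.Windows.Controls;
using System.Windows.Media;

public static class CacheExample
{
    // Opt one element into the GPU bitmap cache. Once cached, transform and
    // opacity animations on it are composited by the GPU from the cached
    // bitmap; any other visual change forces the CPU to re-render that bitmap.
    public static void EnableGpuCache(Image coverArt)
    {
        coverArt.CacheMode = new BitmapCache();
        // XAML equivalent: <Image CacheMode="BitmapCache" ... />
    }
}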
Useless guy said:
Hardware acceleration is TURNED OFF by default, except for a few elements. [...]
The CLR also has a huge impact on performance. Managed code is usually much slower than native code, even with JIT, and the managed wrappers and BCL classes add further overhead on top of that.
A concise answer from someone who actually knows what they're talking about.
Thanks for that!
Yes, third-party apps that run in the Silverlight/XNA sandbox do have GPU acceleration turned on by default. If an app is not as smooth as the native apps, it's because it was designed badly. You can check out a number of apps, such as Feed Me and Fast XDA, that are exceptionally fast and come very close to the feel of the native apps.
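As an illustration, here's the kind of animation that stays smooth on a cached element, because it only touches the properties the compositor (render) thread can handle - opacity and transforms. The element and method names are just examples:

Code:
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Animation;

public static class FadeHelper
{
    public static void FadeIn(UIElement tile)
    {
        // Cache the element so the GPU composites it from a bitmap.
        tile.CacheMode = new BitmapCache();

        var fade = new DoubleAnimation
        {
            From = 0,
            To = 1,
            Duration = new Duration(TimeSpan.FromMilliseconds(300))
        };

        Storyboard.SetTarget(fade, tile);
        Storyboard.SetTargetProperty(fade, new PropertyPath(UIElement.OpacityProperty));

        var storyboard = new Storyboard();
        storyboard.Children.Add(fade);
        storyboard.Begin();
    }
}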
Agreed - it's mostly down to poor development practices, from developers who haven't properly read through the documentation.
For example, parsing data on the UI thread is just stupid - it should always be done on a background thread.
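Roughly what I mean is something like this - a sketch, where ParseFeed and RenderFeed are placeholders for your own code, not a real API:

Code:
using System.Threading;
using System.Windows;

public static class FeedLoader
{
    public static void LoadFeed(string xml)
    {
        // Heavy parsing on a worker thread...
        ThreadPool.QueueUserWorkItem(_ =>
        {
            var items = ParseFeed(xml);

            // ...then marshal only the finished result back to the UI thread.
            Deployment.Current.Dispatcher.BeginInvoke(() => RenderFeed(items));
        });
    }

    // Placeholders - substitute your own parsing / binding code.
    static object ParseFeed(string xml) { return xml; }
    static void RenderFeed(object items) { }
}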
The number of applications that DON'T bitmap-cache their images, leaving most of them to be constantly redrawn by the CPU, is ridiculous - and that's one of the main reasons a lot of image-intensive apps slow down: poor coding. That, and developers are still decoding the images themselves on the UI thread instead of using the background-thread decoding introduced in Mango, which improves interaction performance a lot.
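Put together, that advice looks roughly like this (a sketch; the image name and URI are just examples):

Code:
using System;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class ImageSetup
{
    public static void ShowCover(Image coverImage)
    {
        // Decode the picture off the UI thread (Mango / WP 7.1).
        var source = new BitmapImage
        {
            CreateOptions = BitmapCreateOptions.BackgroundCreation
        };
        source.UriSource = new Uri("Images/cover.jpg", UriKind.Relative);
        coverImage.Source = source;

        // Cache the rendered element on the GPU so the CPU doesn't keep
        // redrawing it while it's scrolled or animated.
        coverImage.CacheMode = new BitmapCache();
    }
}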
Marshalling updates onto the UI thread using nested Dispatcher.BeginInvoke(() => Dispatcher.BeginInvoke(() => ...)) calls is also a good idea, as it tends to mean the inner code has to wait for the UI thread to clear its current queue before it is invoked.
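In other words, something like this (just illustrating the queueing behaviour; the helper name is made up):

Code:
using System;
using System.Windows;

public static class UiQueue
{
    // The outer delegate is queued onto the UI thread; when it runs, it queues
    // the inner delegate again, so the actual work only executes after whatever
    // was already waiting on the UI thread has been processed.
    public static void RunWhenUiIsQuiet(Action work)
    {
        Deployment.Current.Dispatcher.BeginInvoke(() =>
            Deployment.Current.Dispatcher.BeginInvoke(work));
    }
}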
There are other cases where apps are just trying to load or animate far too much at once - instead of lining up animations and updates one by one, they let everything run at the same time and bottleneck each other.
So: effective use of BitmapCache, marshalling updates in queues rather than as a free-for-all, and NOT toggling Visibility on UI elements, as that forces them to be redrawn. A better, though more code-intensive, approach is to set the opacity to zero and disable hit testing - that way the resources stay loaded on the GPU and don't have to be redrawn every time they're shown.
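A quick sketch of that last trick (the parameter name is just an example):

Code:
using System.Windows;

public static class PanelVisibility
{
    // Hide/show without touching Visibility, so the element's cached surface
    // stays on the GPU instead of being thrown away and redrawn later.
    public static void Hide(UIElement panel)
    {
        panel.Opacity = 0;
        panel.IsHitTestVisible = false;   // stop the invisible element swallowing taps
    }

    public static void Show(UIElement panel)
    {
        panel.Opacity = 1;
        panel.IsHitTestVisible = true;
    }
}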
It's certainly not possible to get these apps to perform as well as the native ones (Silverlight is still quite "performance hungry", especially when it comes to XAML parsing), but you can certainly get them to a really acceptable level. Personally I think my apps stand out as some of the better-performing ones, complete with animations and lots of data - search for "Springy for formspring" or "Artist Info" on your phones.
~Johnny said:
Agreed - it's mostly down to poor development practices, from developers who haven't properly read through the documentation.
Indeed, yes it is!
We should look at some open source WP7 app code to see what the devs are doing.
N8ter said:
We should look at some open source WP7 app code to see what the devs are doing.
Those open-source programs are usually written by code monkeys.
Useless guy said:
Those open-source programs are usually written by code monkeys.
That's the whole point. To look at them and see what they're doing wrong.
Duh?
Was on the portal and noticed this:
Hey everyone,
So, I was experiencing significant lag as we all do from time to time, and decided I was going to get to the bottom of it.
After tracing and debugging for hours, I discovered the source of 90% of Android's lag. In a word, entropy (or lack thereof).
Google's JVM, like Sun's, reads from /dev/random. For all random data. Yes, the /dev/random that uses a very limited entropy pool.
Random data is used for all kinds of stuff.. UUID generation, session keys, SSL.. when we run out of entropy, the process blocks. That manifests itself as lag. The process cannot continue until the kernel generates more high quality random data.
So, I cross-compiled rngd, and used it to feed /dev/urandom into /dev/random at 1 second intervals.
Result? I have never used an Android device this fast.
It is literally five times faster in many cases. Chrome, maps, and other heavy applications load in about 1/2 a second, and map tiles populate as fast as I can scroll. Task switching is instantaneous. You know how sometimes when you hit the home button, it takes 5-10 seconds for the home screen to repopulate? Yeah. Blocking on read of /dev/random. Problem solved. But don't take my word for it .. give it a shot!
Update!
I've built a very simple Android app that bundles the binary, and starts/stops the service (on boot if selected). I'll be adding more instrumentation, but for now, give it a shot! This APK does not modify /system in any way, so should be perfectly safe.
This is my first userspace Android app, so bear with me!
Note that this APK is actually compatible with all Android versions, and all (armel) devices. It's not at all specific to the Captivate Glide.
Caveats
There is a (theoretical) security risk, in that seeding /dev/random with /dev/urandom decreases the quality of the random data. In practice, the odds of this being cryptographically exploited are far lower than the odds of someone attacking the OS itself (a much simpler challenge).
This may adversely affect battery life, since it wakes every second. It does not hold a wakelock, so it shouldn't have a big impact, but let me know if you think it's causing problems. I can add a blocking read to the code so that it only executes while the screen is on. On the other hand, many of us attribute lag to lacking CPU power. Since this hack eliminates almost all lag, there is less of a need to overclock, potentially reducing battery consumption.
If you try it, let me know how it goes.
ROM builders - feel free to integrate this into your ROMs (either the .apk / application, or just the rngd binary called from init.d)!
If anyone's interested, I've launched a paid app on the Play store for non-xda users. As I add features I'll post the new versions here as a thanks to you guys (and xda community at large for being such a great resource). But if anyone's interested in the market's auto-update feature, just thought I'd mention it.
Cheers!
Should this help with the lag that we get on the Play?
If anyone else wants to try it, here's the link to the thread:
http://forum.xda-developers.com/showthread.php?t=1987032
I tried it and got faster loading for some minor things (like contact picture loading), and apps installed on internal memory seem to load faster. In terms of UI smoothness I don't notice any difference, because the UI has been smooth from the beginning.
I think I may try it out, although I don't see any instructions.
Sent from my ASUS Transformer Pad TF300T using xda app-developers app
BTW, somebody already posted this in the XPlay Android Dev section:
http://forum.xda-developers.com/showthread.php?t=2073382
For those who are still wondering what ART is, just watch this video:
https://www.youtube.com/watch?v=U5oliXcOqxg&feature=youtube_gdata_player
I am not posting this to promote any channel or organization - it is just for information only.
D5+/cm10.2/1.2GHz.
Sent from Tapatalk app
coolshahabaz said:
For those who are still wondering what ART is, just watch this video: [...]
In words:
The Dalvik runtime is being replaced by the next-generation ART (Android Runtime) in Android 5.x or later. Google has introduced it now so that developers can start testing their apps against it.
The older Dalvik uses a JIT (just-in-time) compiler, which compiles Java/dex bytecode to optimized native machine code at runtime.
The next-generation ART uses an AOT (ahead-of-time) compiler, which optimizes and converts the Java/dex code to native code during installation (ahead of time).
Both have their pros and cons, but overall AOT is faster than JIT. AOT is better at making fast things really fast - e.g. scrolling a page in an app (lists, images, webviews, etc.) is faster with AOT, and it needs to reclaim memory (run the garbage collector) less often than with JIT.
JIT, as used by Dalvik, has the added advantage of being able to optimize the code using information only available at runtime, but it takes time to do so.
Overall, ART should give faster launch times, smoother scrolling and better battery life. It's still at an alpha stage; we'll probably see ART doing wonders in the Android 5.0 L release.
Enable ART on any Stock ROM
But hey, from what I've read, ART pre-compiles the apps, so they no longer need to run on a VM. Isn't that right? One advantage of a VM, though, is that apps are separated from each other and from the system, so app crashes, malware and viruses don't affect the system. So, if you are using ART, will the system become more vulnerable? Are we going to need virus scanners for real now?
I DO NOT take any credit for the information provided. Just helpful information.
What Is ART?
ART, which stands for Android Runtime, handles app execution in a fundamentally different way from Dalvik. The current runtime relies on a Just-In-Time (JIT) compiler to interpret bytecode, a generic version of the original application code. In a manner of speaking, apps are only partially compiled by developers, then the resulting code must go through an interpreter on a user's device each and every time it is run. The process involves a lot of overhead and isn't particularly efficient, but the mechanism makes it easy for apps to run on a variety of hardware and architectures. ART is set to change this process by pre-compiling that bytecode into machine language when apps are first installed, turning them into truly native apps. This process is called Ahead-Of-Time (AOT) compilation. By removing the need to spin up a new virtual machine or run interpreted code, startup times can be cut down immensely and ongoing execution will become faster, as well.
At present, Google is treating ART as an experimental preview, something for developers and hardware partners to try out. Google's own introduction of ART clearly warns that changing the default runtime can risk breaking apps and causing system instability. ART may not be completely ready for prime time, but the Android team obviously feels like it should see the light of day. If you're interested in trying out ART for yourself, go to Settings -> Developer options -> Select runtime. Activating it requires a restart to switch from libdvm.so to libart.so, but be prepared to wait about 10 minutes on the first boot-up while your installed apps are prepared for the new runtime. Warning: Do not try this with the Paranoid Android (or other AOSP) build right now. There is an incompatibility with the current gapps package that causes rapid crashing, making the interface unusable.
How Much Better Is It?
For now, the potential gains in efficiency are difficult to gauge based on the version of ART currently shipping with KitKat, so it isn't representative of what will be possible once it has been extensively optimized. Thus far, estimates and some benchmarks suggest that the new runtime is already capable of cutting execution time in half for most applications. This means that long-running, processor-intensive tasks will be able to finish faster, allowing the system to idle more often and for longer. Regular applications will also benefit from smoother animations and more instantaneous responses to touch and other sensor data. Additionally, now that the typical device contains a quad-core (or greater) processor, many situations will call for activating fewer cores, and it may be possible to make even better use of the lower-powered cores in ARM's big.LITTLE architecture. How much this improves battery life and performance will vary quite a bit based on usage scenarios and hardware, but the results could be substantial.
What Are The Compromises?
There are a couple of drawbacks to using AOT compilation, but they are negligible compared to the advantages. To begin with, fully compiled machine code will usually consume more storage space than that of bytecode. This is because each symbol in bytecode is representative of several instructions in machine code. Of course, the increase in size isn't going to be particularly significant, not usually more than 10%-20% larger. That might sound like a lot when APKs can get pretty large, but the executable code only makes up a fraction of the size in most apps. For example, the latest Google+ APK with the new video editing features is 28.3 MB, but the code is only 6.9 MB. The other likely notable drawback will come in the form of a longer install time for apps - the side effect of performing the AOT compilation. How much longer? Well, it depends on the app; small utilities probably won't even be noticed, but the more complex apps like Facebook and Google+ are going to keep you waiting. A few apps at a time probably won't bother you, but converting more than 100 apps when you first switch to ART is a serious test of patience. This isn't entirely bad, as it allows the AOT compiler to work a little harder to find even more optimizations than the JIT compiler ever had the opportunity to look for. All in all, these are sacrifices I'm perfectly happy to make if it will bring an otherwise more fluid experience and increased battery life.
Overall, ART sounds like a pretty amazing project, one that I hope to see as a regular part of Android sooner rather than later. The improvements are likely to be pretty amazing while the drawbacks should be virtually undetectable. There is a lot more than I could cover in just this post alone, including details on how it works, benchmarks, and a lot more. I'll be diving quite a bit deeper into ART over the next few days, so keep an eye out!
Special thanks to Bart Tiemersma for his contributions!
Credits : http://www.androidpolice.com/2013/1...-in-secret-for-over-2-years-debuts-in-kitkat/
Sent from my LG-LS970 using XDA Premium 4 mobile app
Hey everyone, this is a shared page
Version 2.0.0 released!
This version introduces performance tuning, power management control, and an optional MMC I/O queue extension/timing change.
Tested on HTC One Insertcoin 7.0.6, Android 4.4, Sense 5.5
For those of you who have seen reboots / black screens that seem to be caused by Seeder, I suspect it may be due to the power management implemented in previous versions. Disabling power management (by unchecking "Suspend RNG service while screen off") may help. In my testing, battery impact was negligible (less than 2% per 24h).
The performance profiles are Light, Moderate, and Aggressive, and they control how frequently rngd wakes. The default configuration (Light) is unchanged from previous versions. Moderate and Aggressive may impact battery life (slightly), but may also help on devices where the entropy pool is drained quickly and often.
Last but not least, the "Extend I/O queue" option increases the nr_requests on MMC devices to 1024, and increases the dirty page expiry time, allowing more outstanding writes to accumulate. This may allow the I/O scheduler to make better decisions and combine more writes; some users have reported an improvement under heavy I/O.
Feedback appreciated!
---
On some (older) versions of Android, the JVM (and other components) often read random data from the blocking /dev/random device. On newer builds, this problem has been solved, yet depletion of the input entropy pool still seems to slow devices.
So, I cross-compiled rngd, and used it to feed /dev/urandom into /dev/random at 1 second intervals.
Result? Significant lag reduction (for some people)!
Note - if you want to try it, you must be running a rooted device, and you only need to install one of the APKs (latest version is best). Then, just open it, and turn it on. The other files (patches / .zips) are intended for recompiling, packaging, and init.d integration. If you uninstall the app, either turn off rngd first (open, and click the on/off button), or reboot afterwards; the UI does not presently kill the daemon on uninstallation.
For more information on using the .zip flashing method, see Ryuinferno's post here:
http://forum.xda-developers.com/show...postcount=1924
FAQ
Q: Do I need the .apk or the .zip?
A: The easiest method is simply installing the latest .apk, attached below. You do not need to use the patch or the .zip file.
Q: What is the patch for?
A: The patch file contains the source differences needed to recompile the Seeder version of the rngd binary. You only need it if you want to recompile rngd yourself.
Q: What is the .zip file for?
A: The .zip file contains the latest rngd binary. It is intended for ROM builders or those who want to build their own CWMR packages.
Q: Seeder keeps shutting down! Does this mean I have to restart it?
A: The Seeder UI is only used to configure and start/stop the RNG service, which runs in the background. The RNG service is not visible from Android, since it is a native Linux process. You can terminate the UI at any time, and the service will continue running.
Q: Does seeder cause excessive battery drain?
A: Seeder 1.2.7 introduced an RNG service power-saving mode. The process automatically suspends whenever the screen is off. The code is actually in the rngd native binary, so suspend/resume events happen independently of the UI; you can see it in action by attaching to the running process with strace. This means that battery drain while the screen is off is highly unlikely.
While the screen is on, the RNG service simply polls a file descriptor every second, and, when needed, injects a small amount of random data into /dev/random (and calls an ioctl). It's unlikely that this would present enough load to trigger a CPU governor state change at 10 MHz (let alone 200 MHz), so it shouldn't impact battery life. Having said that, I have received sporadic reports that it does reduce battery life on some devices. They may be coincidental (other software installed at the same time), or due to extra device use while testing. Or, they may be real. If you think your battery drain has increased, shoot me a PM!
Q: How can I see the RNG service Linux process?
A: In a terminal, type: ps | grep rngd
Q: How do I uninstall the .apk?
A: Launch Seeder, and stop the RNG service. Then, uninstall the app as you normally would. Alternatively, uninstall the app, and reboot.
Q: Is seeding /dev/random with /dev/urandom safe?
A: Seeding /dev/random with PRNG-derived data does reduce the quality of its random data. However, it's worth noting that nearly all major OSes except Linux do this. Linux is one of the very few to offer a blocking RNG device. And, at least as of ICS, Dalvik doesn't even read /dev/random, so there is little difference anyway.
Updates
There has been a lot of controversy about Seeder/rngd. In newer versions of Dalvik, nothing touches /dev/random, and yet many users (including myself) still notice a lag reduction. There are theories ranging from kernel lock contention, to UI polling load when crediting the entropy pool, to simply kicking the governor - and many believe it's all placebo. I'm trying my best to figure out what exactly is happening, and others are as well.
Someone asked how I arrived at the conclusion I did when I started the thread back in November, and I posted this; I think it might be better served here:
A while back one of the webapps I was hosting on Tomcat (server-side) was experiencing some inexplicable latency and while stracing java I saw it frequently hanging on read()'s from /dev/random. I checked the available entropy, and it was constantly under 250 or so. It was a VM, no HWRNG, so I decided to use rngd to push urandom->random.
Dropped session creation times under load from 5-10 seconds to less than a second.
It's worth noting that Linux is one of very few OSes that have a blocking RNG device. Free/OpenBSD, Windows, etc.. essentially only provide urandom. It's generally considered secure, even for long-term crypto keys, so long as the initial seed is big (and random) enough.
Checked on my device, and saw a few processes grabbing /dev/random. /proc/sys/kernel/random/entropy_avail reporting depleted input pool. Figured it was worth a shot, so I rebuilt rngd for arm (with a few patches, linked on first page), and tried it out. It made a significant difference. Posted it up on this thread, and had a lot of positive feedback. Wanted to get into Android development, so figured.. why not wrap a little UI around it. More positive feedback, so I threw it on the market as well.
I had no idea it would take off like this and was shocked when I saw it Thursday morning. I'm in the awkward position now of explaining why it seems to work for some people, and not for others, especially given the fact Dalvik doesn't have references to /dev/random as of ICS. Theories abound, but it looks like it might be an issue of polling the UI for input events when the entropy pool drops (which never happens so long as rngd is running).
I'm doing this as a hobby. I'm a *nix admin by trade, and can only spend time working on this stuff on evenings and weekends, and the last few weeks have been kinda nuts.
I want to stress to everyone that:
a) It doesn't work the way I thought it did on later Android builds, but it does reduce latency for me and many others even on these builds,
b) I'm offering (and always will offer) Seeder for free to everyone on XDA,
c) Like I say in the market description, if anyone has purchased it and it isn't working, PLEASE email me for a refund (and let me know what device you're on if you're willing).
I was one of the first to root the Captivate glide (my first Android phone), and submitted the A2DP bitpool patch; I was active in the n900 community. I hope everyone understands that I'm doing my best here!
I hope the technique proves useful to people, and if there is in fact contention at the kernel level, I hope it's solved so we all benefit.
Version 2.0.0 attached. No changes.
Version 2.0.0b1 attached. New performance profile selector, I/O queue extender, and power saving control. Improved root checking.
Version 1.4.0 attached. Major refactoring. Service control now fully asynchronous.
Version 1.3.1 attached. No changes from 1.3.1-beta.
Version 1.3.1-beta released. New root check method during ANR-sensitive code.
Version 1.3.0 attached. Proper IntentServices for process control, and notification on upgrade / loss of root / autostart failure.
Version 1.2.9 attached. Yet another update to the upgrade/autostart code.
Version 1.2.8 attached. Asynchronous startup of rngd during boot; this should solve the remaining autostart problems some users have reported.
Version 1.2.7 released. This version introduces a much more efficient suspend-on-sleep mode for rngd.
Version 1.2.6 released. This version reverts the suspend-on-sleep rngd change which may have been contributing to new latency. I'm sorting out a better way of implementing it.
Version 1.2.5 released. This version should fix the autostart failure some users have seen.
Version 1.2.4 released. This version implements a progress bar displaying your currently available entropy, as well as automatic rngd restart on upgrade.
Version 1.2 released. This version implements rngd suspend-on-sleep, and contains minor user interface updates, more robust process and superuser checks, and a new icon (thanks Nathanel!)
Version 1.1 released. This version uses the release signature, so you will need to uninstall the old XDA version first!
This version fixes the issue some users were seeing on later Jellybean ROMs, where the UI would misreport the RNG service status.
Caveats
There is a (theoretical) security risk, in that seeding /dev/random with /dev/urandom decreases the quality of the random data. In practice, the odds of this being cryptographically exploited are far lower than the odds of someone attacking the OS itself (a much simpler challenge). It's worth noting that as of ICS, Dalvik uses /dev/urandom exclusively, anyway, and that Linux is one of very few modern operating systems that even offer a blocking RNG device to begin with.
Support for rngd suspend-on-sleep was added to Seeder 1.2. It should no longer impact battery life while the device is asleep.
There has been a large amount of speculation on why/if this actually improves performance on ICS+ devices. I'm continuing to investigate and will post updates to this thread.
If you try it, let me know how it goes.
ROM builders - feel free to integrate this into your ROMs (either the .apk / application, or just the rngd binary called from init.d)!
If anyone's interested, I've launched a paid app on the Play store for non-xda users. As I add features I'll post the new versions here as a thanks to you guys (and xda community at large for being such a great resource). But if anyone's interested in the market's auto-update feature, just thought I'd mention it.
Cheers!
Thread original => http://forum.xda-developers.com/showthread.php?t=1987032
Trying it right away
Sent from my HTC One using Tapatalk
Noticed some changes, thanks for this :3
Sent from my HTC One using Tapatalk
How about stating right at the top that this is a shared page - not your own work or typing - rather than copying and pasting it as if you did it yourself?
Other members clearly indicate whether something is a share or not.
Dude, why not just link to lambgx02's (the developer's) page and tell everyone that it works for you, instead of copying and pasting his work? Did he give you permission to post it, since it is a paid app on Google Play? https://play.google.com/store/search?q=seeder 2.0&hl=en
Version 2.0 came out a year ago; there have been no updates since.
Thread Closed - Original project HERE.