Hi
Which is better? Tegra 3 or Mali 400?
I don't know, mate; this is what my phone is capable of now after the update.
Sent from my HTC One X using xda premium
Well it will be a race for sure
Mali might be faster (or maybe not), but Tegra 3 is definitely better, because it gets better, enhanced games. Developers develop for Tegra; they don't develop for Mali or Adreno.
One guy complained that Shadowgun looks better on my phone than on his iPad 3; I had to explain that I'm running the THD version and that we have those Tegra-enhanced games. That makes a difference.
Tegra 3 will run all games. Adreno/Mali will require Chainfire3D with plugins to run Tegra games.
That's my view on that.
The Mali 400 is old now; surely it's not what the SGS3 is getting.
John.
Even if the SGS3 gets the Mali T-604, I will stick with Tegra 3 for now, unless I see games dedicated to the T-604, and more than just one.
http://www.engadget.com/2012/04/20/galaxy-s-iii-leak/
According to this, it will have the 400.
antipesto93 said:
http://www.engadget.com/2012/04/20/galaxy-s-iii-leak/
According to this, it will have the 400.
I didn't notice it mention the 400. But if true, people would find it disappointing, even though the 400 is still a serious piece of hardware. Given the 720p screen, performance would be worse than the SGS2's.
The Mali's performance is the same as the Tegra 3's in the graphics benchmarks I've done on my Note vs my Prime and my One X (which just goes to show how average the Tegra 3 GPU really is, I think; no better than something at least six months older). Disappointing that it's not the upgraded GPU, if that's accurate, but it doesn't differentiate the products at all.
Tinderbox (UK) said:
The Mali 400 is old now
Technically, so is Teg3. S4 uses 28nm and the 4212 uses 32nm. Teg3 is two 45nm A9 chips glommed together because Nvidia wanted to be first to market with a next-gen chip. It's the least advanced of any of the three SoCs. From a GPU perspective none of the three really move the ball forward and are just evolutionary vs. revolutionary. If I had to guess best overall performance I’d say 4212, Teg3, and S4 in that order. Because S4 and the 4212 are on smaller dies they’ll be more efficient and handily beat Teg3 at battery life (except maybe at idle).
delete post.
BarryH_GEG said:
Technically, so is Teg3. S4 uses 28nm and the 4212 uses 32nm. Teg3 is two 45nm A9 chips glommed together because Nvidia wanted to be first to market with a next-gen chip. It's the least advanced of any of the three SoCs. From a GPU perspective none of the three really move the ball forward and are just evolutionary vs. revolutionary. If I had to guess best overall performance I’d say 4212, Teg3, and S4 in that order. Because S4 and the 4212 are on smaller dies they’ll be more efficient and handily beat Teg3 at battery life (except maybe at idle).
Tegra 3 is actually made on 40nm. Nvidia still uses TSMC's 40nm process, is migrating towards 28nm with its desktop GPUs, and will eventually move to 28nm with the Tegra 3+.
I hate how people always say it's a bad thing that Apple didn't upgrade the GPU but just added more cores, or that Samsung didn't change the Mali 400 GPU. The fact is that the Mali and the SGX543MP2 were ahead when they were released. Now there is actual competition like the Adreno 320 and Tegra 3/4, and a simply overclocked SGX or Mali chip is enough to keep up with the competition.
NZtechfreak said:
The Mali's performance is the same as the Tegra 3's in the graphics benchmarks I've done on my Note vs my Prime and my One X (which just goes to show how average the Tegra 3 GPU really is, I think; no better than something at least six months older). Disappointing that it's not the upgraded GPU, if that's accurate, but it doesn't differentiate the products at all.
The Mali 400/450 is a 2nd-generation GPU like Tegra 2, with only 44 million polygons/sec; my Adreno 205 does 41 million and the Tegra 3 does 129 million.
Gameloft games at the end of 2012 will need 100 million...
The 3rd-generation Mali is the T-604/640, and ARM says it delivers 500% of the performance of previous Mali GPUs:
http://www.arm.com/products/multimedia/mali-graphics-hardware/mali-t604.php
The 500% figure is with quad-core-optimised apps (only Tegra 3 will have those within 2 years), but it's 250% with dual-core...
As the Tegra 3 is equal to the T-604, the Mali 400 is pwned...
- 1st gen: Adreno 200, Mali 200/300, PowerVR SGX 520/530, Tegra 1
- 2nd gen: Adreno 205, Mali 400MP/450MP, PowerVR SGX 540/554, Tegra 2
- 3rd gen: Adreno 220/225/320, Mali T604/640, PowerVR G6200/6430, Tegra 3
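Just to lay the arithmetic of that post out in one place, here's a tiny sketch (Python, purely illustrative; every number in it is a figure claimed above, not a measurement):

```python
# Figures as claimed in the post above (millions of polygons/sec) -- not measured data.
claimed_mpolys = {"Mali 400/450": 44, "Adreno 205": 41, "Tegra 3": 129}
required_end_2012 = 100  # claimed requirement for Gameloft games by the end of 2012

for gpu, rate in claimed_mpolys.items():
    verdict = "meets" if rate >= required_end_2012 else "falls short of"
    print(f"{gpu}: {rate}M polys/s, {verdict} the claimed {required_end_2012}M requirement")

# The claimed T-604 uplift over the previous Mali generation:
mali_400 = 1.0
t604_quad_optimised = 5.0 * mali_400  # "500%" with quad-core-optimised apps
t604_dual = 2.5 * mali_400            # "250%" otherwise
print(f"T-604 vs Mali 400: {t604_quad_optimised:.1f}x quad-optimised, {t604_dual:.1f}x dual")
```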
Sekhen said:
The Mali 400/450 is a 2nd-generation GPU like Tegra 2, with only 44 million polygons/sec; my Adreno 205 does 41 million and the Tegra 3 does 129 million.
Gameloft games at the end of 2012 will need 100 million...
The 3rd-generation Mali is the T-604/640, and ARM says it delivers 500% of the performance of previous Mali GPUs:
http://www.arm.com/products/multimedia/mali-graphics-hardware/mali-t604.php
The 500% figure is with quad-core-optimised apps (only Tegra 3 will have those within 2 years), but it's 250% with dual-core...
As the Tegra 3 is equal to the T-604, the Mali 400 is pwned...
- 1st gen: Adreno 200, Mali 200/300, PowerVR SGX 520/530, Tegra 1
- 2nd gen: Adreno 205, Mali 400MP/450MP, PowerVR SGX 540/554, Tegra 2
- 3rd gen: Adreno 220/225/320, Mali T604/640, PowerVR G6200/6430, Tegra 3
link to your numbers about Tegra 3?
I have not used any device with Mali 400. Sorry mate~~
I think that Tegra 3 is better, but we have to wait for the 3.x kernel to solve the battery problem properly.
Sent from my HTC One X using xda app-developers app
The Mali-400 is good and strong, and the Tegra 3 might not be the fastest one there is, but it's the only one that gets the best-looking games. On top of that, Tegra 3 Plus is coming soon, and then next year another one with DirectX and supposedly console-like performance. See what Nvidia does for desktops and just hope they keep the pace with mobile GPUs, and we will get there too. I don't really consider a non-Tegra device unless it amazes me with noticeably better power efficiency, or optimized games start coming out for it.
Would you buy a non-Nvidia, non-ATI graphics card for your PC?
schriss said:
The Mali-400 is good and strong, and the Tegra 3 might not be the fastest one there is, but it's the only one that gets the best-looking games. On top of that, Tegra 3 Plus is coming soon, and then next year another one with DirectX and supposedly console-like performance. See what Nvidia does for desktops and just hope they keep the pace with mobile GPUs, and we will get there too. I don't really consider a non-Tegra device unless it amazes me with noticeably better power efficiency, or optimized games start coming out for it.
Would you buy a non-Nvidia, non-ATI graphics card for your PC?
Exactly!
Given the choice, I would buy a Tegra device over anything else.
The Gnex has a 1.2GHz dual-core Cortex-A9 TI OMAP, 1GB RAM, a PowerVR SGX540, and a 720p HD display.
Are the specs of the Gnex better than a 1.2GHz quad-core Cortex-A7 MediaTek with 1GB RAM, a PowerVR SGX544 and a 720p display?
From the benchmark perspective, no it's not. Dual A9s are usually equal to quad A7s in CPU power, while the SGX544MP1 is obviously superior to the SGX540. However, the bloatware those small manufacturers tend to put on MTK devices will obviously slow the phone down. Word around the internet also says that although the MT6589 is a quad A7 CPU, only 2 cores are used for normal tasks.
I suppose you're going to buy a device; don't buy MTKs. There's usually no development for them, and they may never get future Android upgrades even via flashing (because there are no custom ROMs at all), meaning your phone could be dead on arrival. Personal opinion, so feel free to disagree.
Sent from Google Nexus 4 @ CM10.2
What about PowerVR 6200 GPU vs Adreno 430?
A good question!
What's actually inside these processors...
Can't post an outside link... It's (8) A53 cores clocked at 2.0GHz with a PowerVR 6200.
The 810 is a (4) A53 / (4) A57 big.LITTLE combo.
Long story short, Geekbench says the MediaTek barely wins in multi-core but is smashed in single-core, because it's a true octa-core, just a mid-range one severely overclocked, with last generation's GPU running the blinky flashy show.
This is all based on the MT6795; not sure what the (T) means...
atomikpunx said:
What's actually inside these processors...
Can't post an outside link... It's (8) A53 cores clocked at 2.0GHz with a PowerVR 6200.
The 810 is a (4) A53 / (4) A57 big.LITTLE combo.
Long story short, Geekbench says the MediaTek barely wins in multi-core but is smashed in single-core, because it's a true octa-core, just a mid-range one severely overclocked, with last generation's GPU running the blinky flashy show.
This is all based on the MT6795; not sure what the (T) means...
So, what does this mean in real world use?
Sharpshooterrr said:
So, what does this mean in real world use?
The latest MediaTek processors truly provide smoothness. The MT6795T is just a flashy overclocked MT6795; yes, it beats the SD810. Even the Snapdragon 805 beats the 810 in some devices, maybe because the 810 throttles itself into oblivion.
MediaTek SoCs are known to have crap embedded security.
Additionally, if you think about the way the big.LITTLE architecture works, it makes a lot more logical sense than a flat 8-core true octa-core setup.
M9+ is out, so we'll see
Is the PowerVR G6430 any good compared to Adreno GPUs?
http://www.gsmarena.com/apple_iphone_5s_vs_lg_g2_vs_nokia_lumia_1020-review-997p5.php
That's the same GPU used in the iPhone 5s. Based on this benchmark, it's better than the Adreno 330, I think.
The Adreno 405 isn't a top-class GPU. According to GFLOPS numbers, the 405 is better than the 1st-gen Adreno 320 (S4 Pro, S4 Prime) and weaker than the 2nd gen.
But that's all benchmarks; the most important thing is user experience, and last but not least, optimization.
GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
The Adreno 405 has only half the power of the PowerVR G6430.
The Adreno 405 is a mid-range GPU,
while the PowerVR G6430 and the Adreno 320, 330 and 420 are last year's and current flagship GPUs.
GPUs are mostly rated by GFLOPS:
http://kyokojap.myweb.hinet.net/gpu_gflops/
And each Adreno generation has basic, mid and high-power GPUs.
The Adreno 405 is 4th generation (05 means basic) and can match a 3rd-gen mid.
The Adreno 420 is 4th generation (20 is mid) and can match a 3rd-gen high GPU.
See the GFLOPS of each in the link above.
And yes, optimization matters most for gaming.
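To make that naming rule of thumb concrete, here's a small sketch (Python, illustrative only; the generation/tier split just follows the heuristic described in this post, not an official Qualcomm naming spec, and the helper decode_adreno is made up for this example):

```python
# Rule of thumb from the post above: first digit ~ generation,
# last two digits ~ tier within that generation (05 basic, 20-ish mid, 30+ high).
# This is a forum heuristic, not an official Qualcomm spec.

def decode_adreno(model: int) -> dict:
    gen = model // 100        # e.g. 405 -> 4th generation
    tier_code = model % 100   # e.g. 405 -> 05, 420 -> 20
    if tier_code < 10:
        tier = "basic"
    elif tier_code < 30:
        tier = "mid"
    else:
        tier = "high"
    return {"model": model, "generation": gen, "tier": tier}

for m in (320, 330, 405, 420, 430):
    print(decode_adreno(m))
# 405 -> 4th gen basic, 420 -> 4th gen mid, 430 -> 4th gen high
```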
The PowerVR G6430 in the Zenfone 2 is clocked higher than in the iPhone 5s but lower than in the iPads and the Atom 3570. Its performance is between the Adreno 330 and 430, which is excellent given that it was designed in 2012 and released in 2013. Reclocking it at 640MHz like its 3570 brother should give a nice run for its price; still, technically, it won't be as fast as the Adreno 430. However, in real-world usage, coupled with a more powerful Intel CPU, it should match it, as the CPU is able to extract more GPU power.
If you are really looking for the most powerful mobile GPU, the Nvidia Tegra X1 is at the top, close to twice the performance of the top Qualcomm 810 GPU, the Adreno 430. In AnTuTu it only scores 75K because the CPU is slower than others like Intel's, but 75K is still unbeaten for the moment. Surely Nvidia and ATI have much more experience in the GPU domain, so it's not surprising that they are the fastest.
Now, if only ATI partnered with Intel to provide us with 14nm goodies :angel:
P.S.: For a broader picture, the Tegra X1 chip is close to twice the performance of a PS3, which is astonishing considering its small size and 2W max power consumption.
The Nvidia Shield TV, based on the Tegra X1, has an active cooling system.
So how can it be compared to phone SoCs?
My bad, I thought it was found in the Nvidia Shield tablet. It's its brother, the Kepler-based K1, that is currently used, but still, at 365 GFLOPS on Nvidia's website, it competes with the Adreno 430. Note that the PS3 was 192 GFLOPS.
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (1W less than Kepler) at load; Kepler would peak at 11W. That's thanks to the efficiency of the new 20nm Maxwell cores. The Nvidia Shield TV has more and larger components to power, and it's also for sure clocked higher.
"According to Nvidia, the power consumption in a tablet powered by Tegra X1 will be on par with Tegra K1. In fact, idle power consumption will be even lower thanks to the various architecture improvements. Tegra K1 was designed to operate at around 5-8 watts, with infrequent peaks up to 11 watts when running stressful benchmarks, so the X1 will be well within the realm of tablet power requirements." Source: greenbot.com
Here's this too: http://www.pcper.com/reviews/Processors/NVIDIA-Announces-Tegra-X1-Maxwell-Hits-Ultra-Low-Power
I really like the fact that PC manufacturers are entering the mobile market; after all, they have been building computer components for ages. This will open the door to more powerful and cheaper SoCs, especially because they have the ability to mass-produce and develop the latest tech with many factory plants worldwide.
aziz07 said:
My bad, I thought it was found in the Nvidia Shield tablet. It's its brother, the Kepler-based K1, that is currently used, but still, at 365 GFLOPS on Nvidia's website, it competes with the Adreno 430. Note that the PS3 was 192 GFLOPS.
An interesting fact is that the Tegra X1 actually draws much less power at idle and slightly less power (1W less than Kepler) at load; Kepler would peak at 11W. That's thanks to the efficiency of the new 20nm Maxwell cores. The Nvidia Shield TV has more and larger components to power, and it's also for sure clocked higher.
"According to Nvidia, the power consumption in a tablet powered by Tegra X1 will be on par with Tegra K1. In fact, idle power consumption will be even lower thanks to the various architecture improvements. Tegra K1 was designed to operate at around 5-8 watts, with infrequent peaks up to 11 watts when running stressful benchmarks, so the X1 will be well within the realm of tablet power requirements." Source: greenbot.com
Here's this too: http://www.pcper.com/reviews/Processors/NVIDIA-Announces-Tegra-X1-Maxwell-Hits-Ultra-Low-Power
I really like the fact that PC manufacturers are entering the mobile market; after all, they have been building computer components for ages. This will open the door to more powerful and cheaper SoCs, especially because they have the ability to mass-produce and develop the latest tech with many factory plants worldwide.
Maxwell can be very power hungry when you clock it all the way up, and the X1 has more CUDA cores than the K1: the X1 has 2 SMMs with 256 cores in total, while the K1 only has 1 SMX with 192.
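For rough context on those core counts, peak FP32 throughput is usually estimated as shader cores × 2 FLOPs per clock (one fused multiply-add) × clock speed. A small sketch, assuming the commonly cited ~950MHz GPU clock for the K1 and ~1000MHz for the X1 (the clocks are assumptions on my part; the core counts and the 365/192 GFLOPS figures are the ones already quoted in this thread):

```python
# Rough peak-FP32 estimate: cores * 2 FLOPs per clock (FMA) * clock in GHz.
# Core counts are from the post above; the clocks are commonly cited figures
# and should be treated as assumptions rather than official specs.

def peak_gflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz

k1 = peak_gflops(192, 0.95)  # ~365 GFLOPS, matching the number quoted earlier
x1 = peak_gflops(256, 1.0)   # ~512 GFLOPS FP32 (the X1 doubles that in FP16)

print(f"Tegra K1: ~{k1:.0f} GFLOPS")
print(f"Tegra X1: ~{x1:.0f} GFLOPS")
print(f"vs the ~192 GFLOPS quoted for the PS3: K1 ~{k1 / 192:.1f}x, X1 ~{x1 / 192:.1f}x")
```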
Also, PC manufacturers have always been in the mobile market; you could even say they started the mobile market. For instance, Apple was a PC manufacturer, and Steve Jobs dedicated 70% of his life to PCs rather than phones. Samsung makes everything, and they have a lot of experience in making notebooks too. So the two most powerful (or most successful) players in the mobile sector are also PC manufacturers. What do you mean by PC manufacturers entering the mobile market?
It's getting off topic, but Intel and Apple weren't the first to build a cell phone. Intel was the first company to build a CPU, though. Motorola built the first cell phone.
On a side note, Apple never really built anything except the aesthetics; it started with IBM building for them after a lack of success with Synertek for a couple of months. By the way, Samsung does not manufacture PC CPUs or GPUs; the only CPU they build is the Exynos, for mobile. I think you misinterpreted the fact that they sell laptops. Yes, they do, but they are not the ones building the major components; that's Intel and AMD. They may build the memory components, but not the CPU or GPU.
You are seeing technology the other way around. If we take, let's say, a two-year-old GPU and a new one, the new one can have double the transistor and component count yet still consume less power. It's about architecture efficiency and transistor nm; e.g. the Intel chip in our Zenfone 2 is built with 3D 22nm transistors, which are more power efficient. That's how tech flows.
Anyway, Apple is slowly declining, Intel is building up their PC segment (replacing IBM), and Samsung is building their next iPhone and taking care of the mobile segment. We can already see what's next.
I have been building PCs for over 15 years; it's my hobby.
@mods: there should be a "resolved" button, just like on other forums, so threads don't get cluttered lol
GrandpaaOvekill said:
Is the PowerVR G6430 any good compared to Adreno GPUs?
I know benchmarks aren't everything, but GFXBench gives a good idea of the performance difference between the two. Basically, the PowerVR G6430 is much more powerful than the Adreno 405.
PowerVR G6430:
https://gfxbench.com/result.jsp?ben...VR Rogue G6430&base=device&ff-check-desktop=0
Adreno 405:
https://gfxbench.com/result.jsp?ben...ter=Adreno 405&base=device&ff-check-desktop=0
Here are some videos of a Zenfone 2 against a phone that uses the SD 615/Adreno 405 combo:
https://www.youtube.com/watch?v=N3DcRHXrTHg
https://www.youtube.com/watch?v=TYZr53U2Tfk
Hope this helps.
I have been using the tools I could find that would give a close look at the X1 as it is implemented in the Pixel C. Best I can tell, Google decided to use a revision that is reported by AIDA64 as r1p1. The most interesting, and most disappointing to me, aspect of this implementation is that it appears the 4 A53 cores are turned off?? Can anyone clarify what is happening?
Yeah, it looks like it's only a quad-core; that's why the CPU is weaker. But the GPU looks the same; it has to be, to control the heat.
I have been reading early this morning that in some reports it is listed as a quad-core with the A53 cores as shadow cores, so in reality it is only a quad-core instead of an eight-core. Nvidia's documentation is confusing, to say the least. http://www.nvidia.com/object/tegra-x1-processor.html
states 8 cores; the TX1 developer kit board states 4 cores: https://developer.nvidia.com/embedded/buy/jetson-tx1-devkit
dkryder said:
I have been using the tools I could find that would give a close look at the X1 as it is implemented in the Pixel C. Best I can tell, Google decided to use a revision that is reported by AIDA64 as r1p1. The most interesting, and most disappointing to me, aspect of this implementation is that it appears the 4 A53 cores are turned off?? Can anyone clarify what is happening?
The Tegra X1 is based on ARM big.LITTLE Architecture which has Quad A57 cores for performance and Quad A53 cores for power efficiency. The Tegra implementation uses CPU Migration managed at kernel level. When a process is running that doesn't need much raw power, it will run on one of the A53 cores. If the process requires more CPU power (such as when a game changes from menu to gameplay), it can migrate to the A57 core.
I think the info AIDA receives is probably coming from /proc/cpuinfo, which generally shows big.LITTLE devices configured with the CPU Migration kernel as quad-core, as the kernel scheduler only sees one virtual core for each A53/A57 pair.
The revisions such as r0p0, r1p1, r2p0 etc. refer to the individual cores, not the whole SoC.
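If anyone wants to see what the kernel is actually exposing, here's a minimal sketch (Python, run on the device, e.g. from an adb shell; the part-number map only covers the two cores discussed here) that counts the CPUs listed in /proc/cpuinfo and decodes the ARM "CPU part" and variant/revision fields into a core name and the rXpY form mentioned above:

```python
# Minimal sketch: count the CPUs the kernel exposes in /proc/cpuinfo and decode
# the ARM ID fields. Part numbers 0xd03 (Cortex-A53) and 0xd07 (Cortex-A57) are
# the two cores discussed in this thread; other cores would need more entries.

ARM_PARTS = {0xD03: "Cortex-A53", 0xD07: "Cortex-A57"}

cpus = []
with open("/proc/cpuinfo") as f:
    current = {}
    for line in f:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":       # each "processor" line starts a new CPU block
            current = {}
            cpus.append(current)
        current[key] = value

print(f"CPUs visible to the kernel: {len(cpus)}")
for i, cpu in enumerate(cpus):
    part = int(cpu.get("CPU part", "0x0"), 16)
    variant = int(cpu.get("CPU variant", "0x0"), 16)
    revision = int(cpu.get("CPU revision", "0"))
    name = ARM_PARTS.get(part, f"unknown part {part:#x}")
    print(f"cpu{i}: {name}, r{variant}p{revision}")
```

On a CPU Migration kernel like the Pixel C's, this should list only the four A57 cores, matching what AIDA64 reports.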
Some of Samsung's Exynos devices that implement the ARM big.LITTLE architecture used an implementation called Heterogeneous Multi-Processing, which allows all 8 cores to be used at once. I seem to recall this being done as a firmware/kernel revision update (it might have been on the Note 3). Not sure we can expect this to happen for the Pixel C.
skally said:
The Tegra X1 is based on ARM big.LITTLE Architecture which has Quad A57 cores for performance and Quad A53 cores for power efficiency. The Tegra implementation uses CPU Migration managed at kernel level. When a process is running that doesn't need much raw power, it will run on one of the A53 cores. If the process requires more CPU power (such as when a game changes from menu to gameplay), it can migrate to the A57 core.
I think the info AIDA receives is probably coming from /proc/cpuinfo, which generally shows big.LITTLE devices configured with the CPU Migration kernel as quad-core, as the kernel scheduler only sees one virtual core for each A53/A57 pair.
The revisions such as r0p0, r1p1, r2p0 etc. refer to the individual cores, not the whole SoC.
Some of Samsung's Exynos devices that implement the ARM big.LITTLE architecture used an implementation called Heterogeneous Multi-Processing, which allows all 8 cores to be used at once. I seem to recall this being done as a firmware/kernel revision update (it might have been on the Note 3). Not sure we can expect this to happen for the Pixel C.
skally,
OK. My previous experience with an octa-core, if the X1 is an octa-core, is the Qualcomm 810 as in the OnePlus Two. AIDA64 reports 8 A53 cores, so there is no big.LITTLE configuration with the 810. I ran across this on an Nvidia dev forum; it seems the A53s are invisible, not turned off as I said above.
https://devtalk.nvidia.com/default/topic/904289/does-anyone-get-8-cpus-listed-/
dkryder said:
skally,
OK. My previous experience with an octa-core, if the X1 is an octa-core, is the Qualcomm 810 as in the OnePlus Two. AIDA64 reports 8 A53 cores, so there is no big.LITTLE configuration with the 810. I ran across this on an Nvidia dev forum; it seems the A53s are invisible, not turned off as I said above.
https://devtalk.nvidia.com/default/topic/904289/does-anyone-get-8-cpus-listed-/
The CPU is still big.LITTLE architecture in both the Tegra X1 and the SD810; both Nvidia and Qualcomm license and use unmodified ARM cores in their SoC designs. The kernel task scheduler is where the difference lies: the Tegra uses Symmetric Multi-Processing/CPU Migration, while the SD810 uses Heterogeneous Multi-Processing/global task scheduling.
I have no idea why Nvidia doesn't enable HMP on the Tegra; it's supposed to be even more power efficient.
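Purely as a toy illustration of that scheduling difference (made-up numbers, nothing to do with the real kernel code): with CPU/cluster migration the scheduler sees one virtual CPU per A53/A57 pair and only decides which physical core backs it, while with HMP/global task scheduling all eight cores are visible and each task can be placed on a big or little core individually.

```python
# Toy model of the two big.LITTLE modes discussed above. The loads and the 0.5
# threshold are invented; the point is only the visibility difference:
# 4 virtual CPUs (CPU migration) vs 8 individually schedulable cores (HMP/GTS).

THRESHOLD = 0.5  # made-up load level at which a virtual CPU switches to its big core

def cpu_migration(pair_loads):
    """One virtual CPU per A53/A57 pair; pick which physical core backs each one."""
    return [("A57-big" if load > THRESHOLD else "A53-little", load)
            for load in pair_loads]

def global_task_scheduling(task_loads, n_big=4):
    """All cores visible; the n_big heaviest tasks land on the big cores."""
    ranked = sorted(range(len(task_loads)), key=lambda i: task_loads[i], reverse=True)
    big_slots = set(ranked[:n_big])
    return [("A57-big" if i in big_slots else "A53-little", task_loads[i])
            for i in range(len(task_loads))]

pair_loads = [0.9, 0.2, 0.7, 0.1]                        # 4 virtual CPUs
task_loads = [0.9, 0.2, 0.7, 0.1, 0.6, 0.05, 0.3, 0.8]   # 8 runnable tasks

print("CPU migration (kernel sees 4 CPUs):", cpu_migration(pair_loads))
print("HMP/GTS (kernel sees 8 cores):     ", global_task_scheduling(task_loads))
```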
skally said:
The CPU is still big.LITTLE architecture in both the Tegra X1 and the SD810; both Nvidia and Qualcomm license and use unmodified ARM cores in their SoC designs. The kernel task scheduler is where the difference lies: the Tegra uses Symmetric Multi-Processing/CPU Migration, while the SD810 uses Heterogeneous Multi-Processing/global task scheduling.
I have no idea why Nvidia doesn't enable HMP on the Tegra; it's supposed to be even more power efficient.
OK, thanks for the information.