RunUO Community


I came across this...

Alrighty, this is what I saw:

http://www.codeguru.com/forum/showthread.php?t=365154 said:
Hi all,

I was reading a book today about data types in C# and have a few burning and itchy questions. They're fundamental concepts, but it's more about the details (Computer Science level).

1) Consider the following: Code:

int n, a, b;


and Code:

int n;
int a;
int b;


It's said that the 1st style above will cause some performance loss. Is that true? (Even though I know it won't be noticeable.)

2) Is it that a long can store the same number of digits as a double (both are 8 bytes large), only that a double can store decimal digits as well?

3) In C#, a float is 4 bytes, a double is 8 bytes and a decimal is 16 bytes. The decimal data type stores fewer digits than a float or a double, but how can it take up 16 bytes???!!!

4) A bool requires 1 byte of memory (8 bits). Since we all know that the bool data type can store only a TRUE (1) or a FALSE (0), shouldn't it take up just 1 bit instead of 1 byte? After all, the value 1 (TRUE) or 0 (FALSE) can be stored in 1 bit and doesn't require 8 bits (1 byte).

5) Code:
System.Int32 m_myvar = 500;


and Code:

int m_myvar = 500;


When this program is run under the .NET (CLR) environment, will the 1st style execute slightly faster (even if unnoticeably) than the 2nd style, because we're using the .NET data type directly? (Something like calling the Win32 API directly instead of wrappers like MFC.)

6) Code:
float my_var = 1.35F;


I saw the above code in the book today. Is there a real need to add the 'F' at the end to tell the compiler that this is a float? I mean, are there any advantages or practical reasons behind this?

Thanks!
Xeon.



I was wondering about two things; one is the answers to the above. The forum I found it at did answer a few, but yeah. Then I thought of another thing...

Code:
if( ((from != null) || (from.Map != Map.Internal)) || (!from.Deleted) )
{
    //code
}

or

if( from != null )
{
    if( from.Map != Map.Internal )
    {
        if( !from.Deleted )
        {
            //code
        }
    }
}

Which one of those two would be faster? To me, the second looks more organized, but both end up accomplishing the exact same task. Is there actually any difference in performance? Or anything else, for that matter?
 

mordero

Knight
I believe that if you have "Optimize code" checked in the project properties, those ifs will compile to the same MSIL code.

And I thought int was the same as System.Int32 on 32-bit systems and the same as System.Int64 on 64-bit systems, with no performance loss or gain by using either...
 
mordero;650094 said:
I believe that if you have "Optimize code" checked in the project properties, those ifs will compile to the same MSIL code.

And I thought int was the same as System.Int32 on 32-bit systems and the same as System.Int64 on 64-bit systems, with no performance loss or gain by using either...
Sooo... They're still the same? I suppose it'd be easier to understand (but more lines... boo).

Well, they stated that System.Int32 compared to int gives no gain in performance, but the website where I found that little chat is very interesting: CodeGuru Forums - 6 C# fundamental questions
 

mordero

Knight
My understanding was that int was just a language keyword, and that when the compiler on a 32-bit OS sees int, it actually uses System.Int32, while on a 64-bit OS it actually uses System.Int64.

Oh, and as for
Code:
int a, b, c;
vs
Code:
int a;
int b;
int c;

Those are the same; the only time this might make a difference is at compile time, but the same IL code should be generated.

And it looks like the first two replies to that post are correct in what they say...
 

noobie

Wanderer
No, it doesn't. You need to explicitly use 64-bit integers.

System.Int16 = short
System.Int32 = int
System.Int64 = long
 

Ray

Sorceror
mordero;650649 said:
Hmmm, whoops, yeah, I just remembered reading that somewhere. My bad.
Could have been a book on C/C++, where the size of an 'int' depends on the compiler. In .NET, they're all fixed-size. However, a real change happens to pointers (and references), as they have to address the whole 64-bit scope.
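A quick way to check both claims yourself - a minimal, self-contained sketch (sizeof on the predefined value types works even outside unsafe code):

Code:
using System;

class SizeCheck
{
    static void Main()
    {
        // The built-in value types are fixed-size on every platform:
        Console.WriteLine( sizeof( short ) );   // 2 (System.Int16)
        Console.WriteLine( sizeof( int ) );     // 4 (System.Int32)
        Console.WriteLine( sizeof( long ) );    // 8 (System.Int64)

        // Pointers and references are what actually grow on 64-bit:
        Console.WriteLine( IntPtr.Size );       // 4 in a 32-bit process, 8 in a 64-bit one
    }
}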




IHaveRegistered;650067 said:
Code:
if( ((from != null) || (from.Map != Map.Internal)) || (!from.Deleted) )
{
    //code
}

or

if( from != null )
{
    if( from.Map != Map.Internal )
    {
        if( !from.Deleted )
        {
            //code
        }
    }
}

Which one of those two would be faster? To me, the second looks more organized, but both end up accomplishing the exact same task. Is there actually any difference in performance? Or anything else, for that matter?

Those two statements aren't doing the same thing, because the first one would cause an exception if from is null. An OR chain keeps evaluating operands until one is true. The second one behaves like AND, which keeps evaluating until one is false (the code only executes if all checks succeed).
However, if both used AND, they would probably perform the same, as they do the same thing (assuming there are no variables that go out of scope). Apart from that, you'll find better ways to improve performance than comparing two ifs against each other. :D
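To make the short-circuiting concrete, here's a minimal sketch; the 'from' variable is just a hypothetical stand-in for a Mobile:

Code:
using System;

class ShortCircuitDemo
{
    static void Main()
    {
        object from = null; // hypothetical stand-in for a Mobile

        // && stops at the first false operand, so from.ToString() is never called:
        if( from != null && from.ToString().Length > 0 )
            Console.WriteLine( "never reached while from is null" );

        // || stops at the first TRUE operand; a false null-check means the
        // second operand IS evaluated - which is why the ||-version above
        // dereferences a null 'from':
        try
        {
            if( from != null || from.ToString().Length > 0 )
                Console.WriteLine( "reached only when the first operand is true" );
        }
        catch( NullReferenceException )
        {
            Console.WriteLine( "|| went on to evaluate the second operand" );
        }
    }
}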


-----
1: Not sure about that, but under the hood, this should be compiled to just the same code, AFAIK.
2: No, double stores fewer digits. An integral value like long is stored 'as is'. A floating-point value is stored with a mantissa and an exponent to its base. These types are also called 'approximate numbers', as they have limited precision. You can use double to store much higher values than with Int64, but they are of limited precision: only the highest ~16 significant digits of a number are kept.
3: decimal has another base. Unlike the other types, it's base-10; double is base-2, leading to those wonderful outputs like 0.8 - 0.1 = 0.699999~. And it's not an approximate number. A decimal stores more 'digits' than a double, but fewer on the left side of the point; it's more precise, which is really important if you're calculating with, say, currencies.
4: 1 byte is the smallest chunk of memory a processor can address. There's nothing more to say about this; it's a technology restriction. Except that .NET usually pads even 8-bit values to 32 bits.
5: No, this compiles to just the same IL code. A difference between these two statements is most likely due to inaccurate testing - you can't test this with only one run, as other processes on your system can interfere with the test.
6: Try to compile it without the F. It won't compile. The F is there to tell the compiler that this constant is a float. Without it, the compiler assumes it to be a double, and it can't implicitly convert that to float.
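A small sketch of points 3 and 6 together (the commented outputs are what I'd expect; the exact double formatting can vary by runtime):

Code:
using System;

class LiteralDemo
{
    static void Main()
    {
        // float f = 1.35;  // would not compile: 1.35 is a double literal (error CS0664)
        float f = 1.35F;    // the F suffix makes the literal a float

        // double is base-2, so 0.8 and 0.1 aren't exactly representable;
        // decimal is base-10, so the same subtraction is exact:
        Console.WriteLine( 0.8d - 0.1d );   // something like 0.7000000000000001
        Console.WriteLine( 0.8m - 0.1m );   // 0.7

        Console.WriteLine( f );
    }
}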
 
Code:
if( ((from != null) || (from.Map != Map.Internal)) || (!from.Deleted) )
{
    //action code
}
and
Code:
if( from != null )
{
    if( from.Map != Map.Internal )
    {
        if( !from.Deleted )
        {
            //action code
        }
    }
}
are two completely different things.
The first one performs the action code if from is not null, OR if the map is not internal, OR if from is not deleted.

However, the second one:
if from isn't null, it checks that the map is not internal; only then does it check that from is not deleted.

For both of them to be the same the first one needs to be changed to:
Code:
if( (from != null) && (from.Map != Map.Internal) && (!from.Deleted) )
{
    //action code
}

-Storm
 

mordero

Knight
Yeah, Storm and Ray are right. I didn't even look at them because I knew the kind of question you were asking (there was a thread that went through an argument dealing with them) and what you wanted to know.
 

Jeff

Lord
Regarding the 1st part of your thread here, the int vs. Int32 question: what I find weird is that they are treated as the same when you output their Type...

Take the following code

Code:
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace SpeedTesting
{
	class Program
	{
		static Stopwatch _stopWatch;
		static void Main(string[] args)
		{
			_stopWatch = new Stopwatch();

			for( int b = 0; b < 15; b++ )
			{
				_stopWatch.Start();
				for( int i = 0; i < int.MaxValue; i++ )
				{
					int a = 0;
				}

				_stopWatch.Stop();

				Output(typeof(int), int.MaxValue, _stopWatch.ElapsedMilliseconds);

				_stopWatch.Reset();

				_stopWatch.Start();
				for( int i = 0; i < int.MaxValue; i++ )
				{
					Int32 a = 0;
				}
		
				_stopWatch.Stop();

				Output(typeof(Int32), int.MaxValue, _stopWatch.ElapsedMilliseconds);

				_stopWatch.Reset();
				Console.WriteLine();
			}

			Console.ReadLine();
		}

		private static void Output(Type type, int times, long milliSeconds)
		{
			Console.WriteLine("Called ({0}) {1} times in {2} milliseconds", type, times, milliSeconds);
		}
	}
}

You will notice that my output of the variable is by Type.

However, here is the output of the application...

Code:
Called (System.Int32) 2147483647 times in 6729 milliseconds
Called (System.Int32) 2147483647 times in 6627 milliseconds

Called (System.Int32) 2147483647 times in 6691 milliseconds
Called (System.Int32) 2147483647 times in 6666 milliseconds

Called (System.Int32) 2147483647 times in 6715 milliseconds
Called (System.Int32) 2147483647 times in 6665 milliseconds

Called (System.Int32) 2147483647 times in 6668 milliseconds
Called (System.Int32) 2147483647 times in 6591 milliseconds

Called (System.Int32) 2147483647 times in 6597 milliseconds
Called (System.Int32) 2147483647 times in 6542 milliseconds

Called (System.Int32) 2147483647 times in 6573 milliseconds
Called (System.Int32) 2147483647 times in 6551 milliseconds

Called (System.Int32) 2147483647 times in 6764 milliseconds
Called (System.Int32) 2147483647 times in 6665 milliseconds

Called (System.Int32) 2147483647 times in 6677 milliseconds
Called (System.Int32) 2147483647 times in 6641 milliseconds

Called (System.Int32) 2147483647 times in 6620 milliseconds
Called (System.Int32) 2147483647 times in 6762 milliseconds

Called (System.Int32) 2147483647 times in 6766 milliseconds
Called (System.Int32) 2147483647 times in 6639 milliseconds

Called (System.Int32) 2147483647 times in 6641 milliseconds
Called (System.Int32) 2147483647 times in 6629 milliseconds

Called (System.Int32) 2147483647 times in 6678 milliseconds
Called (System.Int32) 2147483647 times in 6624 milliseconds

I cut it short because I didn't wanna run it 15 times, nor did I need to after just these few results. You can see that indeed Int32 is slightly faster than the standard int. But still, the fact that it's referred to as the same Type is really weird.

Now, just to make sure there wasn't some odd fluke since I called int first and Int32 second, I reversed the statements as follows:

Code:
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace SpeedTesting
{
	class Program
	{
		static Stopwatch _stopWatch;
		static void Main(string[] args)
		{
			_stopWatch = new Stopwatch();

			for( int b = 0; b < 15; b++ )
			{
				_stopWatch.Start();
				for( int i = 0; i < int.MaxValue; i++ )
				{
					Int32 a = 0;
				}

				_stopWatch.Stop();

				Output(typeof(int), int.MaxValue, _stopWatch.ElapsedMilliseconds);

				_stopWatch.Reset();

				_stopWatch.Start();
				for( int i = 0; i < int.MaxValue; i++ )
				{
					int a = 0;
				}
		
				_stopWatch.Stop();

				Output(typeof(Int32), int.MaxValue, _stopWatch.ElapsedMilliseconds);

				_stopWatch.Reset();
				Console.WriteLine();
			}

			Console.ReadLine();
		}

		private static void Output(Type type, int times, long milliSeconds)
		{
			Console.WriteLine("Called ({0}) {1} times in {2} milliseconds", type, times, milliSeconds);
		}
	}
}

And sure enough, it was a fluke: the second statement is still faster than the first, but not every time... Odd, eh?

Code:
Called (System.Int32) 2147483647 times in 6768 milliseconds
Called (System.Int32) 2147483647 times in 6658 milliseconds

Called (System.Int32) 2147483647 times in 6660 milliseconds
Called (System.Int32) 2147483647 times in 6648 milliseconds

Called (System.Int32) 2147483647 times in 6765 milliseconds
Called (System.Int32) 2147483647 times in 6651 milliseconds

Called (System.Int32) 2147483647 times in 6679 milliseconds
Called (System.Int32) 2147483647 times in 6672 milliseconds

Called (System.Int32) 2147483647 times in 6686 milliseconds
Called (System.Int32) 2147483647 times in 6693 milliseconds

Called (System.Int32) 2147483647 times in 6622 milliseconds
Called (System.Int32) 2147483647 times in 6659 milliseconds

Called (System.Int32) 2147483647 times in 6763 milliseconds
Called (System.Int32) 2147483647 times in 6632 milliseconds

Called (System.Int32) 2147483647 times in 6565 milliseconds
Called (System.Int32) 2147483647 times in 6649 milliseconds

Called (System.Int32) 2147483647 times in 6687 milliseconds
Called (System.Int32) 2147483647 times in 6736 milliseconds

Called (System.Int32) 2147483647 times in 6629 milliseconds
Called (System.Int32) 2147483647 times in 6598 milliseconds

Called (System.Int32) 2147483647 times in 6677 milliseconds
Called (System.Int32) 2147483647 times in 6745 milliseconds

Called (System.Int32) 2147483647 times in 6617 milliseconds
Called (System.Int32) 2147483647 times in 6594 milliseconds

I find this odd and interesting :)
 
Ray;651106 said:
However, if both used AND, they would probably perform the same, as they do the same thing (assuming there are no variables that go out of scope). Apart from that, you'll find better ways to improve performance than comparing two ifs against each other. :D


-----
1: Not sure about that, but under the hood, this should be compiled to just the same code, AFAIK.
2: No, double stores fewer digits. An integral value like long is stored 'as is'. A floating-point value is stored with a mantissa and an exponent to its base. These types are also called 'approximate numbers', as they have limited precision. You can use double to store much higher values than with Int64, but they are of limited precision: only the highest ~16 significant digits of a number are kept.
3: decimal has another base. Unlike the other types, it's base-10; double is base-2, leading to those wonderful outputs like 0.8 - 0.1 = 0.699999~. And it's not an approximate number. A decimal stores more 'digits' than a double, but fewer on the left side of the point; it's more precise, which is really important if you're calculating with, say, currencies.
4: 1 byte is the smallest chunk of memory a processor can address. There's nothing more to say about this; it's a technology restriction. Except that .NET usually pads even 8-bit values to 32 bits.
5: No, this compiles to just the same IL code. A difference between these two statements is most likely due to inaccurate testing - you can't test this with only one run, as other processes on your system can interfere with the test.
6: Try to compile it without the F. It won't compile. The F is there to tell the compiler that this constant is a float. Without it, the compiler assumes it to be a double, and it can't implicitly convert that to float.
For the giant block, yeah. I meant to put "&&" and not the "or"s. :\

It all seemed pretty interesting; I never thought to stop and think about the performance of doing the same thing in so many slightly different ways (as shown in my first post, which is a quote from another site...).

Mordero said:
Yeah, Storm and Ray are right. I didn't even look at them because I knew the kind of question you were asking (there was a thread that went through an argument dealing with them) and what you wanted to know.
Yeah, I wasn't paying much attention when I cooked up the random code :rolleyes:

What thread?
 
mordero;651251 said:
It had to do with the way if statements work and how combining all the ifs into one line (like your example) would throw an exception when the object is null...

Let me look for it...

Edit: I think this was it: http://www.runuo.com/forums/script-support/78308-nullchecking-safe.html

But it actually dealt with a problem from another thread (which I think this thread has a link to), and we argued the point in this one.
I see, heated discussion on it too. I guess I'll use both, depending on my mood :rolleyes:

I never knew that the "is" operator handled null first... Like, if monster were some random monster and it were deleted, and then this statement gets run: if( monster is PlayerMobile ), it still won't throw a crash. I've always been checking if it's null before doing something like that! lol. Bleh! :p Well, now I know one more thing! :)
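A tiny sketch of that 'is' behaviour, using string as a stand-in for PlayerMobile so it compiles outside RunUO:

Code:
using System;

class IsDemo
{
    static void Main()
    {
        object monster = null;  // stand-in for a mobile that no longer exists

        // 'is' simply yields false for null - no NullReferenceException:
        Console.WriteLine( monster is string ); // False

        monster = "now it really is a string";
        Console.WriteLine( monster is string ); // True
    }
}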

Oh well, that's very good info... actually, this entire discussion is full of good and highly interesting information.

PS. Great program, Jeff; it kind of makes one wonder about it even more!
 

milt

Knight
@Jeff

I can't be completely sure on this, but my guess as to why int and Int32 both come out as System.Int32 is that they are actually the same thing.
I believe that int is just an alias for Int32, probably to keep some of the feel of C++. I bet that when the compiler generates the IL code for int, it is the same as for Int32.

Matter of fact, I just tested it out now and this is what I wrote:

Code:
using System;

class Program
{
	static void Main( string[] args )
	{
		Int32 a = 0;
		int b = 0;

		Console.WriteLine( "{0}, {1}", a, b );
	}
}

Now what I did was compile that and then use Reflector on it. Reflector gives me the following:

Code:
using System;

internal class Program
{
    private static void Main(string[] args)
    {
        int num1 = 0;
        int num2 = 0;
        Console.WriteLine("{0}, {1}", num1, num2);
    }

}

AH-HA! See? It appears that the Intermediate Language generator does in fact generate the same code for both int and Int32.
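You can also confirm the alias at runtime, without Reflector - a minimal sketch:

Code:
using System;

class AliasCheck
{
    static void Main()
    {
        // 'int' only exists in C# source; at runtime there is one type:
        Console.WriteLine( typeof( int ) == typeof( Int32 ) ); // True
        Console.WriteLine( typeof( int ).FullName );           // System.Int32
    }
}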

Now... as for the timing differences: I think that on the scale of only 15 iterations, the output could seem a little fishy. Consider the following:

Code:
using System;
using System.Collections.Generic;
using System.Diagnostics;

namespace SpeedTesting
{
	class Program
	{
		static Stopwatch _stopWatch;
		static List<long> _intClocks;
		static List<long> _32Clocks;

		static void Main( string[] args )
		{
			_stopWatch = new Stopwatch();
			_intClocks = new List<long>();
			_32Clocks = new List<long>();

			for ( int b = 0; b < 100; b++ )
			{
				_stopWatch.Start();
				for ( int i = 0; i < 100000000; i++ )
				{
					int a = 0;
				}

				_stopWatch.Stop();
				_intClocks.Add( _stopWatch.ElapsedMilliseconds );

				_stopWatch.Reset();

				_stopWatch.Start();
				for ( int i = 0; i < 100000000; i++ )
				{
					Int32 a = 0;
				}

				_stopWatch.Stop();
				_32Clocks.Add( _stopWatch.ElapsedMilliseconds );

				_stopWatch.Reset();
			}

			DumpLists();

			Console.WriteLine( "done" );
			Console.ReadLine();
		}

		static void DumpLists()
		{
			using ( System.IO.StreamWriter op = new System.IO.StreamWriter( "output.log", true ) )
			{
				op.WriteLine( "Int32 vs. int:\n" );

				long intTotal = 0;
				long i32Total = 0;

				for ( int i = 0; i < _intClocks.Count; i++ )
				{
					intTotal += _intClocks[i];
					i32Total += _32Clocks[i];
				}

				op.WriteLine( "Average time for int: {0}", intTotal / _intClocks.Count );
				op.WriteLine( "Average time for Int32: {0}\n", i32Total / _32Clocks.Count );

				op.WriteLine( "//++++++++++++++++ int ++++++++++++++++//" );
				for ( int i = 0; i < _intClocks.Count; i++ )
					op.WriteLine( _intClocks[i] );

				op.WriteLine( "//++++++++++++++++ Int32 ++++++++++++++++//" );
				for ( int i = 0; i < _32Clocks.Count; i++ )
					op.WriteLine( _32Clocks[i] );
			}
		}
	}
}

This does the same timing, but on a scale of 100 runs rather than 15.

Here is my output.log:

Code:
Int32 vs. int:

Average time for int: 89
Average time for Int32: 88

//++++++++++++++++ int ++++++++++++++++//
164
88
86
86
86
86
86
85
90
89
89
88
88
88
85
111
120
118
86
88
85
91
90
87
84
84
88
89
87
87
87
90
86
91
112
155
90
85
86
82
88
88
83
85
89
88
84
88
87
86
89
91
84
85
86
89
84
88
91
85
88
89
84
83
88
85
87
88
85
88
87
84
84
88
87
84
91
86
88
86
90
90
87
89
87
87
86
85
86
86
87
87
85
87
86
86
91
90
84
86
//++++++++++++++++ Int32 ++++++++++++++++//
108
87
88
84
87
87
91
88
89
89
90
86
84
86
88
118
117
86
86
84
86
88
86
86
87
87
88
86
88
87
87
86
86
86
162
160
86
89
88
85
83
83
87
87
87
83
85
87
85
86
86
85
84
86
87
89
85
85
84
87
86
85
86
86
89
86
86
90
90
83
86
87
85
85
86
88
84
88
86
86
87
88
88
90
88
89
86
86
88
85
90
85
84
84
88
87
85
85
86
86

As the number of test runs increases, the averages come pretty close together.

Do you agree?
 

Ray

Sorceror
Jeff;651229 said:
the second statement is still faster than the first, but not every time... Odd, eh?
This is because you're working with a multitasking system ;)
There are countless reasons why two statements of the same type run at slightly different speeds.

Like the scheduler: every thread on your system gets a slice of CPU time in which it can execute instructions. With a look at the Task Manager, you can see that there are hundreds of threads working simultaneously. Every one of them acquires a small slice of CPU time, which can lead to a major delay if a foreign thread does some intensive calculations. You can mostly sidestep this by setting the priority of your own application to 'real time', but that can cause system instability. An endless loop or lock in a real-time application will probably force you to restart your system with the good old power button.

Or the runtime environment itself, like the garbage collector:
as you can't control how and when the GC does its work, it can pretty easily skew such performance tests.
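Some of that noise can be reduced (never fully removed) with a warm-up pass and a forced collection before timing - just a sketch of the idea, reusing the empty-loop body from the tests above:

Code:
using System;
using System.Diagnostics;

class Bench
{
    static void Main()
    {
        Run(); // warm-up pass so the JIT compilation isn't part of the timing

        // Start from a freshly collected heap so the GC is less likely to
        // fire mid-measurement (it still can; this only reduces the odds):
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Stopwatch sw = Stopwatch.StartNew();
        Run();
        sw.Stop();

        Console.WriteLine( "{0} ms", sw.ElapsedMilliseconds );
    }

    static void Run()
    {
        for( int i = 0; i < 100000000; i++ )
        {
            int a = 0; // same empty-assignment body as the tests above
        }
    }
}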


As milt proved, the C# compiler translates every int to the Int32 type. This is because CIL doesn't know a type 'int'; it's just a keyword in C# that refers to the real System.Int32 type. The same goes for every other of the so-called built-in types, even in other languages like VB.NET (Integer).



It's quite an interesting topic how the standard implementations perform against each other, but the real performance gain lies somewhere else.
There are some rules of thumb, to be applied in this very order...
  1. Strip useless code
  2. Optimize your instruction sequence, do things only once
  3. Use fast algorithms where they are needed
  4. Optimize your application design to do things once, not twice
  5. Use unsafe code to do your own pointer arithmetic
  6. Optimize your code again!
  7. Use native methods
  8. Switch to a more hardware-related language, like C or ASM, but be aware that C can produce slow code as well if the developer doesn't know how to use it.

If/else mostly plays a part in this game, so here are some more tips for them (see the sketch after this list):
  • Selectivity: check the most selective value first, so you don't have to check more than needed.
  • Simplicity: try to use (plain) values of basic (integral) types, like bool, integral numbers, enumerations, or even references and pointers.
  • Avoidance: if you know a way to avoid them that doesn't make the program unreadable, use it. Learn how to use design patterns; they will help you with that.
This is just off the top of my head; perhaps there are even more :rolleyes:
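For the selectivity tip, a minimal sketch - ExpensiveCheck is hypothetical and just stands in for some costly lookup:

Code:
using System;

class SelectivityDemo
{
    // Hypothetical expensive predicate - a stand-in for a costly lookup:
    static bool ExpensiveCheck( object from )
    {
        System.Threading.Thread.Sleep( 1 );
        return true;
    }

    static void Main()
    {
        object from = null;

        // Cheapest and most selective test first: when from is null,
        // short-circuiting means ExpensiveCheck is never even called.
        if( from != null && ExpensiveCheck( from ) )
            Console.WriteLine( "do the work" );
    }
}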
 