@mrkroket:
It is easy to handle duplicate items with a simple saved flag.
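A minimal sketch of the saved-flag idea (the `GameObject` and `Saver` names are made up for illustration): each object carries a boolean that is set the first time it is written, so a second reference to the same object is skipped.

```java
import java.util.ArrayList;
import java.util.List;

class GameObject {
    final int id;
    boolean saved = false;          // reset before each save pass

    GameObject(int id) { this.id = id; }
}

class Saver {
    final List<Integer> written = new ArrayList<>();

    // Write the object only once, even if it is reachable twice
    // (e.g. an item listed both in a container and in a player's hand).
    void saveObject(GameObject o) {
        if (o.saved) return;        // duplicate reference: already serialized
        o.saved = true;
        written.add(o.id);          // stands in for the real binary write
    }
}
```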
The real problem, which cannot be handled that way, is this:
a) While some method is executing:
- line xx: give some money to player A from player B.
- line xy: give an item to player B from player A.
Let's say both player A and player B are saved right after line xx, and then line xy is executed.
If you reload the world, you get a corrupted save in a sense: the money transfer made it into the save, but the item transfer did not.
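The race above can be sketched like this (the `trade`/`saveAll` names are hypothetical): if the save runs between the two mutations, the snapshot that gets reloaded has the money moved but not the item.

```java
class Player {
    int gold;
    int items;
    Player(int gold, int items) { this.gold = gold; this.items = items; }
}

class World {
    // snapshot taken mid-transaction: this is the "corrupted" save state
    int[] snapshot;

    void saveAll(Player a, Player b) {
        snapshot = new int[] { a.gold, a.items, b.gold, b.items };
    }

    void trade(Player a, Player b) {
        // line xx: B pays A
        b.gold -= 100;
        a.gold += 100;

        saveAll(a, b);   // save fires here, in the middle of the trade

        // line xy: A hands the item to B -- NOT captured in the snapshot
        a.items -= 1;
        b.items += 1;
    }
}
```

On reload, A has both the money and the item, and B has paid for nothing.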
About CRC: evaluating that CRC costs almost as much as writing the data itself; the two take nearly the same time.
I actually explained this in my previous reply: what you are describing is essentially a hash function over the serialization info (see my previous reply).
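To illustrate the cost argument: computing a CRC has to feed every serialized byte through the checksum, so it scans the same amount of data as the write itself would (shown here with the standard `java.util.zip.CRC32`).

```java
import java.util.zip.CRC32;

class CrcCost {
    // Computing the CRC is a full pass over every byte of the save data,
    // just like writing the data out would be.
    static long checksum(byte[] saveData) {
        CRC32 crc = new CRC32();
        crc.update(saveData, 0, saveData.length);  // touches every byte
        return crc.getValue();
    }
}
```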
@brodockbr: using a DB for saves is slower than plain file I/O. On top of that, using xprevail, which is a database persistence layer, means you won't be using binary data. It could take an hour to save the whole world, trust me on this.
So overall, I think the most efficient ways to improve the current serialization would be:
- writing the save to memory first, or increasing the buffer size, which is only 4KB right now (I actually recommended this in another thread about serialization);
- for both of these options, a customized buffer could be used. A single flat 20MB array buffer might not be a good idea, so you could implement a new buffer made of chunks of arrays.
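A sketch of that chunked buffer idea (`ChunkedBuffer` is a hypothetical class, not an existing API): instead of one flat 20MB array, it grows a list of fixed-size chunks, so growing never reallocates or copies the whole buffer.

```java
import java.util.ArrayList;
import java.util.List;

class ChunkedBuffer {
    private static final int CHUNK_SIZE = 64 * 1024;   // 64KB per chunk
    private final List<byte[]> chunks = new ArrayList<>();
    private int posInChunk = CHUNK_SIZE;               // forces first allocation

    void write(byte[] data, int off, int len) {
        while (len > 0) {
            if (posInChunk == CHUNK_SIZE) {            // current chunk is full
                chunks.add(new byte[CHUNK_SIZE]);      // grow by one chunk only
                posInChunk = 0;
            }
            byte[] chunk = chunks.get(chunks.size() - 1);
            int n = Math.min(len, CHUNK_SIZE - posInChunk);
            System.arraycopy(data, off, chunk, posInChunk, n);
            posInChunk += n;
            off += n;
            len -= n;
        }
    }

    long size() {
        return (long) (chunks.size() - 1) * CHUNK_SIZE + posInChunk;
    }
}
```

The chunks can then be flushed to disk one by one, so the save never needs one contiguous 20MB allocation.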