Talesin | Our own little well of hate | Join Date: 2002-11-08 | Member: 7710 | NS1 Playtester, Forum Moderators
edited December 2010
<!--quoteo(post=1816804:date=Dec 17 2010, 03:48 PM:name=Rob)--><div class='quotetop'>QUOTE (Rob @ Dec 17 2010, 03:48 PM) <a href="index.php?act=findpost&pid=1816804"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Java is compiled to what is essentially Java virtual machine machine code. (I actually think those two machines are correct form...) That makes it on par with something like a virtual machine on your computer running Ubuntu and compiling C, not interpreted in the sense that is usually thought of, and I don't think it's fair to make the comparison.<!--QuoteEnd--></div><!--QuoteEEnd--> Yet I do. It requires an interpreter on the VAST majority of machines (I don't count that one ring they had made to promote it), thereby making it interpreted code. Java coders harp on and on, but it still needs an interpreter to run on hardware anyone actually uses day-to-day. Just like Visual Basic can be 'compiled'.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Besides that, if we are to write off progress in memory management at a high language level because it's sloppy<!--QuoteEnd--></div><!--QuoteEEnd--> We have very different ideas of what constitutes 'progress' in memory management. Making it so lazy coders don't have to track their usage and clean up after themselves, leading to massive memory holes and constant Full-GC events being a matter of course, is just outright SHODDY. Not providing a method for freeing that memory OTHER than the automated collection, or collection-hinting, is even more unforgivable.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Next, consider the flexibility of C#'s file system. In C++, a class must be entirely declared within one file - usually a header. The definition of the class, the real source, can be split up any way you want over how many files you want, but you declare part of a class here and the other part over there.<!--QuoteEnd--></div><!--QuoteEEnd--> So... not having to keep your functions small and distinct, and using a hierarchical order to keep everything organized... is a good thing? Somehow? Oh, right. Lazy-ass coders are the target. Forgot for a moment.
Short version, it's two methods of thinking. 'Kick it out the door' shovelware... or old-school, <i>proper</i> coding.
Otherwise known by most crapware coders as being anal-retentive. Funny how the anal-retentive guy's code is commented, maintainable, and runs several orders of magnitude faster and more stably. That isn't to say you can't write a competent program with the rapid-development dreck. I'm sure you can, given time and effort to work around the shortcomings of the 'modern' memory management. It's just saying that it <u>encourages</u> crapware.
<!--quoteo(post=1816927:date=Dec 17 2010, 08:10 PM:name=DiscoZombie)--><div class='quotetop'>QUOTE (DiscoZombie @ Dec 17 2010, 08:10 PM) <a href="index.php?act=findpost&pid=1816927"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->I make my living writing half-assed sql and vba that I picked up on the job!<!--QuoteEnd--></div><!--QuoteEEnd-->
As someone who has to then pick up your code later and fix it, I hate you.
<!--quoteo(post=1816952:date=Dec 18 2010, 03:05 AM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 18 2010, 03:05 AM) <a href="index.php?act=findpost&pid=1816952"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Yet I do. It requires an interpreter on the VAST majority of machines (I don't count that one ring they had made to promote it), thereby making it interpreted code. Java coders harp on and on, but it still needs an interpreter to run on hardware anyone actually uses day-to-day. Just like Visual Basic can be 'compiled'.<!--QuoteEnd--></div><!--QuoteEEnd-->
Here's a Wikipedia reference: <a href="http://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B" target="_blank">http://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B</a> . Yes, Java is nearly always slower on execution, but the tradeoff is the quicker development provided by common code at a much higher level. Not having to manage memory isn't always a bad thing either. I know from experience that enterprise-level applications will get away from you when using pointers - especially when the team doesn't handle them properly. It is nearly impossible to find a buried heap corruption or access violation due to misused pointers. With Java or C# this isn't an issue, and you buy this security at the cost of extra memory usage. If a programmer is bad, they will write bad code in whatever language they use - at least with Java, their bad program might still be useful.
<!--quoteo(post=1816952:date=Dec 18 2010, 03:05 AM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 18 2010, 03:05 AM) <a href="index.php?act=findpost&pid=1816952"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->So... not having to keep your functions small and distinct, and using a hierarchical order to keep everything organized... is a good thing? Somehow? Oh, right. Lazy-ass coders are the target. Forgot for a moment.<!--QuoteEnd--></div><!--QuoteEEnd-->
I'm not sure I understand your meaning. The ability in C# to write partial classes makes both of those things easier. It's surely true that both of those are very important. The example I was talking about here is one from my own experience. We had a whole set of boilerplate functions for our classes in Qt C++ that allowed us to stream them to and from XML and binary. This code was generated with a Python script. When a member variable changed, we had to regenerate. If we could have written partial classes (two header files to declare the class, and two source files to define it) then it would have been very simple: gen_class.h, class.h, gen_class.cpp, class.cpp. Any programmer-written code would go in class.h and class.cpp. The generator would have full control over gen_class.h and gen_class.cpp. As it was, we had to make the script preserve the programmer-written sections of the header by keying on a particular comment in them. It was very, very easy for us to lose work.
<!--quoteo(post=1816952:date=Dec 18 2010, 03:05 AM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 18 2010, 03:05 AM) <a href="index.php?act=findpost&pid=1816952"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Short version, it's two methods of thinking. 'Kick it out the door' shovelware... or old-school, <i>proper</i> coding.
Otherwise known by most crapware coders as being anal-retentive. Funny how the anal-retentive guy's code is commented, maintainable, and runs several orders of magnitude faster and more stably. That isn't to say you can't write a competent program with the rapid-development dreck. I'm sure you can, given time and effort to work around the shortcomings of the 'modern' memory management. It's just saying that it <u>encourages</u> crapware.<!--QuoteEnd--></div><!--QuoteEEnd-->
It's funny you describe the trade off, but pick one as the correct course of action. Engineering is all about trade offs, and there are no wrong answers, just answers that work to varying degrees of acceptability. If you build a bridge that is required to hold 200 tons, but you make it hold 600 tons, you have over-engineered the solution and probably wasted resources.
Not every application written needs to run for years without the system restarting. Many applications only need to run for an hour or less. On today's computers, it could gobble 200K of memory a second for an hour of run time (roughly 700 MB in total) and still perform its job. If the program does useful work, that's all that matters.
As far as the time and effort required to work around these 'shortcomings,' I'd have to say that's bunk. The whole point of high-level languages is to shorten the time required to develop by providing extensive modern libraries, as I described with the TCP/IP example earlier. Memory management is more about making the program maintainable and stable while also reducing the number of possible bugs.
You're right that managing your own memory is tough to learn and master, and it does pay off, and it's actually pretty exciting to write something complex that way. This is why I'm a big fan of Qt. I look for Nokia to turn it into a cross-platform .NET soon.
Talesin | Our own little well of hate | Join Date: 2002-11-08 | Member: 7710 | NS1 Playtester, Forum Moderators
<!--quoteo(post=1817303:date=Dec 19 2010, 12:33 PM:name=Rob)--><div class='quotetop'>QUOTE (Rob @ Dec 19 2010, 12:33 PM) <a href="index.php?act=findpost&pid=1817303"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Yes, Java is nearly always slower on execution, but the tradeoff is the quicker development provided by common code at a much higher level. Not having to manage memory isn't always a bad thing either. I know from experience that enterprise level applications will get away from you when using pointers - especially when the team doesn't handle them properly.<!--QuoteEnd--></div><!--QuoteEEnd--> So... your argument is that sloppy coding by idiots who can't manage their own memory is 'okay', because the tool somewhat picks up the slack. This is the entire argument that I am against. Stupid coders need to learn how to manage memory, not switch to a language that allows their idiocy to continue.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->It's funny you describe the trade off, but pick one as the correct course of action.<!--QuoteEnd--></div><!--QuoteEEnd--> Yes, because it's not a tradeoff. It's a right way, and a lazy way.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Engineering is all about trade offs, and there are no wrong answers, just answers that work to varying degrees of acceptability. If you build a bridge that is required to hold 200 tons, but you make it hold 600 tons, you have over-engineered the solutions and probably wasted resources.
Not every application written needs to run for years without the system restarting. Many applications only need to run for an hour or less. On today's computers, it could gobble 200K of memory a second for an hour of run time and still perform its job. If the program does useful work, that's all that matters.<!--QuoteEnd--></div><!--QuoteEEnd--> This is the kind of thinking that lazy coding breeds. Yes, speed of development and 'kicking it out the door' is important to consider as a manager. If you're writing a one-off, I can see hashing something together quickly. If it's something that will be used for a long while, taking the time to do it right is extremely important... the difference between standing on a can of paint to reach something and going out and getting a damn ladder. Idiots and managers who don't understand why a quick hack can't serve as a day-to-day mainstay deserve what they get. Everyone else shouldn't suffer for the slipshod BS that stupid coders try to pass off as a 'fix'.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->You're right that managing your own memory is tough to learn and master, and it does pay off and it's actually pretty exciting to write something complex that way. This is why I'm a big fan of Qt. I look for Nokia to turn it into a cross platform .Net soon.<!--QuoteEnd--></div><!--QuoteEEnd--> No thanks on the Qt side. But again, learning to actually manage your own damn memory is an important skill for any coder who isn't just hashing something together, and wants to write <i>proper</i> code.
If you can't tell, I have to run support interference for a very badly-written Java hodgepodge sprawl of suck, written by keyboard-flinging monkeys, that will regularly try to run a full GC loop every twenty seconds... which doesn't allow the last run-through to complete before the next one kicks off. And management won't allow this mission-critical core to be rewritten in C, due to the time and development costs to move it over to a sane model. Yet they are getting hammered on why they can't host more than three processes at a time on each server without running out of memory entirely... when each server has 32GB. So yes, much Java hate. Learn C, and take some of the raving ****wit-ism out of the world.
<!--quoteo(post=1817910:date=Dec 21 2010, 09:48 PM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 21 2010, 09:48 PM) <a href="index.php?act=findpost&pid=1817910"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->So... your argument is that sloppy coding by idiots who can't manage their own memory is 'okay', because the tool somewhat picks up the slack. This is the entire argument that I am against. Stupid coders need to learn how to manage memory, not switch to a language that allows their idiocy to continue.<!--QuoteEnd--></div><!--QuoteEEnd--> No, he's arguing that manual memory management is not always necessary, and that doing it manually when you don't have to is what my old English teacher used to call "idiot work." Furthermore, you don't always need the best coder, you just need someone who's good enough. If coder A and coder B could both write the right code in Java but coder B would mess up his pointers in C, yes, I suppose that means that A is better than B. But the fact remains that there's nothing wrong with B's Java code.
If I didn't know better I'd think you were deliberately misconstruing his arguments.
Basically Talesin is bitter (as many hardcore programmers are) about hacks who try to code and fail and then real programmers have to come around and fix their stupidity.
While a large part of it stems from the prior programmers being lazy/incompetent/not knowing enough underlying theory, a small portion is really just because managers are stupid or someone rolls a 1 on their INT check.
So yeah. In some cases Talesin is being a total dork. You don't always have to use C when Java will work just fine (see one-shot code, or ugly Perl scripting that "just works" for some parsing and acts as glue between heavy processes). On the other hand, core processes that are 10x slower because someone wrote garble in Java/Python, didn't know how to manage their memory, and assumed the compiler would handle it are a real and present danger to the sanity and success of many companies.
It's all about scope. And while Talesin may not be right in all cases, his anger is justified.
His anger isn't justified, because he directs it at programming languages when it should be directed at coders. Someone who can't write proper code in Java isn't magically going to turn into a competent coder just because you have him work in C instead. Rob makes the argument that higher-level programming languages can save time by automating certain tasks - Talesin somehow sees this as Rob defending sloppy coding. That's not justified; that's missing the point.
<!--quoteo(post=1818126:date=Dec 21 2010, 06:46 PM:name=lolfighter)--><div class='quotetop'>QUOTE (lolfighter @ Dec 21 2010, 06:46 PM) <a href="index.php?act=findpost&pid=1818126"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->His anger isn't justified, because he directs it at programming languages when it should be directed at coders. Someone who can't write proper code in Java isn't magically going to turn into a competent coder just because you have him work in C instead. Rob makes the argument that higher-level programming languages can save time by automating certain tasks - Talesin somehow sees this as Rob defending sloppy coding. That's not justified; that's missing the point.<!--QuoteEnd--></div><!--QuoteEEnd-->
Valid.
Also, on a side note, I have known people who only knew Java, and once they learned assembly and C they became startlingly better. On the other hand, I've seen lots and lots of people get churned out of the universities with a CS major; I know they had an assembly/C programming class, and their code is still crap.
tankefugl | One Script To Rule Them All... | Trondheim, Norway | Join Date: 2002-11-14 | Member: 8641 | Members, Retired Developer, NS1 Playtester, Constellation, NS2 Playtester, Squad Five Blue
I am quite sure you can write bad programs that create bitter programmers in any language regardless of its features.
[WHO]Them | You can call me Dave | Join Date: 2002-12-11 | Member: 10593 | Members, Constellation
<!--quoteo(post=1818126:date=Dec 21 2010, 07:46 PM:name=lolfighter)--><div class='quotetop'>QUOTE (lolfighter @ Dec 21 2010, 07:46 PM) <a href="index.php?act=findpost&pid=1818126"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->His anger isn't justified, because he directs it at programming languages when it should be directed at coders. Someone who can't write proper code in Java isn't magically going to turn into a competent coder just because you have him work in C instead. Rob makes the argument that higher-level programming languages can save time by automating certain tasks - Talesin somehow sees this as Rob defending sloppy coding. That's not justified; that's missing the point.<!--QuoteEnd--></div><!--QuoteEEnd-->
Just $0.02, because it appears that you guys are using the same points but getting mad about what the other is saying. Maybe just misunderstanding?
A crappy coder is crappy no matter what language. A program written in C by a crappy coder will probably reveal that he/she is terrible. A program written in something like C#/Java will act as a crutch for crap skill. Sure, it works, but it's masking malignant badness.
It's like breast implants. Nobody actually blames the silicone for being fake. They mostly look and act the part. And they don't directly say anything about their owner. But they <i>imply</i> a lot about their owner.
I think the point Talesin is trying to get across is that there are basically frauds running around convincing people they're the real deal, and without the crutch they wouldn't be nearly as widespread.
That still makes it seem like the purpose of high-level programming languages is to act as a crutch for bad coders, which is a misunderstanding of their purpose. If using a high-level programming language implies sloppiness, then it also implies a desire for writing clean code at high speed - it goes both ways, see? Sure, in a perfect world we'd all be writing our programs in assembler, spending years polishing and optimising it until the tiniest flaw was as morning dew before the mid-day sun - but in reality, the customer wants the system in four months, not four years. If we have to sacrifice a processor cycle or two on the altar of Business Requirements, that is what we must do to appease our dark masters. Get to work, code slave.
Yeah, it's unfortunate that a programmer isn't usually judged by how beautiful, concise, or elegant his code is but rather by how well the program he works on is received by users. If the program does its job well enough, all manner of things can be overlooked. And in the case of a client of any sort, it's the user interface that takes the cake. You could have the most robust and awesome C-based back-end ever devised, but if the interface into it sucks, the whole solution sucks. Further, the GUI can hog memory and crash occasionally, but if it's intuitive and easy, it will be acceptable.
It's a tough pill to swallow, I can agree. Having been working the last two years to straighten out a port of C# to Qt C++ that was basically done as copy-pasta and then beaten with a hammer until it compiled, I know all too well the problems of memory management crap. But there wasn't time for a proper C++ port, and looking back, even though I'm putting out all these fires someone else created, it was the right call to rush it like that, because now we have this new product and it's bringing in more work, which means I still have a job. All of which reinforces the trade-off ideas that are core to engineering.
A couple of Spolsky articles that changed my views on this subject: <a href="http://www.joelonsoftware.com/articles/fog0000000069.html" target="_blank">http://www.joelonsoftware.com/articles/fog0000000069.html</a> <a href="http://www.joelonsoftware.com/articles/fog0000000356.html" target="_blank">http://www.joelonsoftware.com/articles/fog0000000356.html</a>
I would like to say a few words for programming novices everywhere. Thank god I don't need to understand memory management and best programming practices in order to quickly kludge together something that gets the job done in VB. I'm not shipping a software product to customers or writing anything that will power the website of my company, just writing some simple tools that no more than a handful of people inside the company will use. But these tools I kludge together have effectively doubled our team's productivity.
Myself and the other analyst in our department are so thankful we're not officially in IT and don't have to deal with all the red tape they do. If someone requests a tool from IT, it sits for a month in the queue, then they propose it to senior management, discuss priority, investigate options, hire a consultant, scrap it 3 times, go overbudget, and if and when it's ever finished, no one even needs it anymore. When *we* need a tool, we just build it.
remi | remedy [blu.knight] | Join Date: 2003-11-18 | Member: 23112 | Members, Super Administrators, Forum Admins, NS2 Developer, NS2 Playtester
<!--quoteo(post=1819881:date=Dec 28 2010, 07:56 AM:name=esuna)--><div class='quotetop'>QUOTE (esuna @ Dec 28 2010, 07:56 AM) <a href="index.php?act=findpost&pid=1819881"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Hungarian notation. Discuss.<!--QuoteEnd--></div><!--QuoteEEnd--> <a href="http://www.joelonsoftware.com/articles/Wrong.html" target="_blank">http://www.joelonsoftware.com/articles/Wrong.html</a> There's the article on why Hungarian notation (as it was originally designed, misinterpreted, and then rediscovered) is good.
Labeling by compiler type is not useful now that IntelliSense and similar tools exist. Using Hungarian for things like pointers or, as in the article above, safe vs. unsafe strings can help the programmer to more easily spot errors.
Read the article if you don't believe me. "Systems Hungarian" is crap. But the spirit in which Hungarian was designed ("Apps Hungarian") actually does provide some good value.
<!--quoteo(post=1819909:date=Dec 28 2010, 05:10 PM:name=Psyke)--><div class='quotetop'>QUOTE (Psyke @ Dec 28 2010, 05:10 PM) <a href="index.php?act=findpost&pid=1819909"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec--><a href="http://www.joelonsoftware.com/articles/Wrong.html" target="_blank">http://www.joelonsoftware.com/articles/Wrong.html</a> There's the article on why Hungarian notation (as it was originally designed, misinterpreted, and then rediscovered) is good.
Labeling by compiler type is not useful now that IntelliSense and similar tools exist. Using Hungarian for things like pointers or, as in the article above, safe vs. unsafe strings can help the programmer to more easily spot errors.
Read the article if you don't believe me. "Systems Hungarian" is crap. But the spirit in which Hungarian was designed ("Apps Hungarian") actually does provide some good value.<!--QuoteEnd--></div><!--QuoteEEnd-->
Yeah using notation to distinguish safe and unsafe strings and pointers is pretty useful, and the whole article he writes about adopting policies that tend towards error detecting while coding is great stuff.
Programming in Microsoft's API is soooo painful, though, and like it or not, that's what Hungarian notation brings to mind. I hate having to remember that a WORD is a 16-bit unsigned value, even on a 32-bit machine. I hate to keep bringing up my love affair with Qt, but their typedefs are a lot better: quint16, quint32, qint64, etc.
On the string side, though, I guess you can't get much better than what Microsoft already does. Let's be honest, the API is the culmination of 50+ years of development; it's a miracle that it's still compatible with programs written for Vista.
Heap corruption is not really a problem; it simply requires tools. The combo of Microsoft C++ runtime checks + Application Verifier + your own operator new override + valgrind beats every memory bug. Now the really interesting bugs are those where you overflow the stack or somehow overwrite code. (x86 + NX doesn't really allow it, but there are other platforms.)
Now about languages. Learn C, you'll be a better C++ coder. Learn C#, you'll be a better C++ coder. Learn Lisp, you'll be a better C++ coder. Learn Haskell, you'll be a better C++ coder. In the end you probably won't be so eager to write everything in C++.
When it comes to high-level languages, I can write a program that downloads stock quotes from Yahoo, then processes them and graphs the result, in 10-20 lines. All asynchronously. As efficient as possible. In C# it's around 50-500 lines depending on whether I want it to be fast (not wasting time waiting for the previous HTTP request to complete). In C++ it'll take even more. A high level of abstraction (abstraction hides unimportant details only, see definition) means shorter programs, which means fewer bugs.
Garbage collection is simply the best solution there is for multithreaded applications. C++ nerds (who never saw anything else nor wrote any bigger multithreaded app) will brag how everything should use RAII or shared/boost ptrs. Because of exceptions you can't use C's way of dealing with memory. If you put new/delete in ctor/dtor then it works only with a single thread. Ref counting implemented by boost/shared/whatever ptrs is obviously worse than GC because it leads to leaks (cycles). Manual weak references are not a solution. But let's say you don't ever have cycles. It's still bad, because reference counting is painfully slow in a multithreaded environment. So now we know that we have to use GC. But C++ is designed in a way that doesn't allow use of an efficient garbage collector. See: <a href="http://flyingfrogblog.blogspot.com/2011/01/boosts-sharedptr-up-to-10-slower-than.html" target="_blank">http://flyingfrogblog.blogspot.com/2011/01...lower-than.html</a>
C++ has the worst design of any popular language (yes, worse than Java :P), but it still has the best compilers out there, lots of libraries, and simple interop with C. That's why I like the "C++ Annotations" book, which untangles all the mess in C++'s design piece by piece.
Hungarian for safe/unsafe? I can do that with the type system: newtype UnsafeString = UnsafeString { unpackUnsafeString :: String } That's painful with C++ but not with Haskell (or any language with a nice type system).
Comments
Yet I do. It requires an interpreter on the VAST majority of machines (I don't count that one ring they had made to promote it), thereby making it interpreted code. Java coders harp on and on, but it still needs an interpreter to run on hardware anyone actually uses day-to-day. Just like Visual Basic can be 'compiled'.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Besides that, if we are to write off progress in memory management at a high language level because it's sloppy<!--QuoteEnd--></div><!--QuoteEEnd-->
We have very different ideas of what constitutes 'progress' in memory management. Making it so lazy coders don't have to track their usage and clean up after themself, leading to massive memory holes and constant Full-GC events being a matter of course is just outright SHODDY. Not providing a method for freeing that memory OTHER than the automated collection, or collection-hinting is even more unforgivable.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Next, consider the flexibility of C#'s file system. In C++, a class must be entirely declared within one file - usually a header. The definition of the class, the real source, can be split up any way you want over how many files you want, but you declare part of a class here and the other part over there.<!--QuoteEnd--></div><!--QuoteEEnd-->
So... not having to keep your functions small and distinct, and using a hierarchical order to keep everything organized... is a good thing? Somehow? Oh, right. Lazy-ass coders are the target. Forgot for a moment.
Short version, it's two methods of thinking. 'Kick it out the door' shovelware... or old-school, <i>proper</i> coding.
Otherwise known by most crapware coders as being anal-retentive. Funny how the anal retentive guy's code is commented, maintainable, and runs several orders of magnitude faster and more stable. That isn't to say you can't write a competent program with the rapid-development dreck. I'm sure you can, given time and effort to work around the shortcomings of the 'modern' memory management. It's just saying that it <u>encourages</u> crapware.
As someone who has to then pick up your code later and fix it, I hate you.
Here's a wikipedia reference: <a href="http://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B" target="_blank">http://en.wikipedia.org/wiki/Comparison_of_Java_and_C%2B%2B</a> . Yes, Java is nearly always slower on execution, but the tradeoff is the quicker development provided by common code at a much higher level. Not having to manage memory isn't always a bad thing either. I know from experience that enterprise level applications will get away from you when using pointers - especially when the team doesn't handle them properly. It is nearly impossible to find a buried heap corruption or access violation due to misused pointers. With java or C# this isn't an issue, and you buy this security at the cost of extra memory usage. If a programmer is bad, they will write bad code in whatever language they use - at least with Java, they're bad program might still be useful.
<!--quoteo(post=1816952:date=Dec 18 2010, 03:05 AM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 18 2010, 03:05 AM) <a href="index.php?act=findpost&pid=1816952"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->So... not having to keep your functions small and distinct, and using a hierarchical order to keep everything organized... is a good thing? Somehow? Oh, right. Lazy-ass coders are the target. Forgot for a moment.<!--QuoteEnd--></div><!--QuoteEEnd-->
I'm not sure I understand your meaning. The ability in C# to write partial classes makes both of those things easier. It's surely true that both of those are very important. The example I was talking about here is one from my own experience. We had a whole set of boilerplate functions for our classes in Qt C++ that allowed us to stream them to and from XML and binary. This code was generated with a python script. When a member variable changed, we had to regenerate. If we could have written partial classes (two header files to declare the class, and two source files to define it) then it would have been very simple. gen_class.h, class.h, gen_class.cpp, class.cpp. Any programmer written could would go in class.h, class.cpp. The generator would have full control over gen_class.h and gen_class.cpp. As it was, we had to make the script preserve sections of the header that were programmer written by keying on a particular comment in them. It was very very easy for us to lose work.
<!--quoteo(post=1816952:date=Dec 18 2010, 03:05 AM:name=Talesin)--><div class='quotetop'>QUOTE (Talesin @ Dec 18 2010, 03:05 AM) <a href="index.php?act=findpost&pid=1816952"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->Short version, it's two methods of thinking. 'Kick it out the door' shovelware... or old-school, <i>proper</i> coding.
Otherwise known by most crapware coders as being anal-retentive. Funny how the anal retentive guy's code is commented, maintainable, and runs several orders of magnitude faster and more stable. That isn't to say you can't write a competent program with the rapid-development dreck. I'm sure you can, given time and effort to work around the shortcomings of the 'modern' memory management. It's just saying that it <u>encourages</u> crapware.<!--QuoteEnd--></div><!--QuoteEEnd-->
It's funny that you describe the trade-off, but then pick one side as the correct course of action. Engineering is all about trade-offs, and there are no wrong answers, just answers that work to varying degrees of acceptability. If you build a bridge that is required to hold 200 tons, but you make it hold 600 tons, you have over-engineered the solution and probably wasted resources.
Not every application written needs to run for years without the system restarting. Many applications only need to run for an hour or less. On today's computers, a program could gobble 200K of memory a second for an hour of run time (around 700MB in total) and still perform its job. If the program does useful work, that's all that matters.
As far as the time and effort required to work around these 'shortcomings,' I'd have to say that's bunk. The whole point of high-level languages is to shorten development time by providing extensive modern libraries, as I described with the TCP/IP example earlier. Memory management is more about making the program maintainable and stable while also reducing the number of possible bugs.
You're right that managing your own memory is tough to learn and master, and it does pay off; it's actually pretty exciting to write something complex that way. This is why I'm a big fan of Qt. I look for Nokia to turn it into a cross-platform .Net soon.
So... your argument is that sloppy coding by idiots who can't manage their own memory is 'okay', because the tool somewhat picks up the slack. This is the entire argument that I am against. Stupid coders need to learn how to manage memory, not switch to a language that allows their idiocy to continue.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->It's funny you describe the trade off, but pick one as the correct course of action.<!--QuoteEnd--></div><!--QuoteEEnd-->
Yes, because it's not a tradeoff. It's a right way, and a lazy way.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Engineering is all about trade offs, and there are no wrong answers, just answers that work to varying degrees of acceptability. If you build a bridge that is required to hold 200 tons, but you make it hold 600 tons, you have over-engineered the solutions and probably wasted resources.
Not every application written needs to run for years without the system restarting. Many applications only need to run for an hour or less. On today's computers, it could gobble 200K of memory a second for an hour of run time and still perform it's job. If the program does useful work, that's all that matters.<!--QuoteEnd--></div><!--QuoteEEnd-->
This is the kind of thinking that lazy coding produces. Yes, speed of development and 'kicking it out the door' is important to consider as a manager. If you're writing a one-off, I can see hashing something together quickly. If it's something that will be used for a long while, taking the time to do it right is extremely important: the difference between standing on a can of paint to reach something and going out and getting a damn ladder. Idiots and managers who don't understand why a quick hack can't serve as a day-to-day mainstay deserve what they get. Everyone else shouldn't suffer for the slipshod BS that stupid coders try to pass off as a 'fix'.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->You're right that managing your own memory is tough to learn and master, and it does pay off and it's actually pretty exciting to write something complex that way. This is why I'm a big fan of Qt. I look for Nokia to turn it into a cross platform .Net soon.<!--QuoteEnd--></div><!--QuoteEEnd-->
No thanks on the Qt side. But again, learning to actually manage your own damn memory is an important skill for any coder who isn't just hashing something together, and wants to write <i>proper</i> code.
If you can't tell, I have to run support interference for a very badly-written Java hodgepodge sprawl of suck, written by keyboard-flinging monkeys, that will regularly try to run a full GC loop every twenty seconds... which doesn't allow the last run-through to complete before the next one kicks off. And management won't allow this mission-critical core to be rewritten in C, due to the time and development costs of moving it over to a sane model. Yet they are getting hammered on why they can't host more than three processes at a time on each server without running out of memory entirely... even though each server has 32GB.
So yes, much Java hate. Learn C, and take some of the raving ****wit-ism out of the world.
No, he's arguing that manual memory management is not always necessary, and that doing it manually when you don't have to is what my old English teacher used to call "idiot work." Furthermore, you don't always need the best coder, you just need someone who's good enough. If coder A and coder B could both write the right code in Java, but coder B would mess up his pointers in C, then yes, I suppose that means A is better than B. But the fact remains that there's nothing wrong with B's Java code.
If I didn't know better I'd think you were deliberately misconstruing his arguments.
While a large part of it stems from the prior programmers being lazy/incompetent/not knowing enough underlying theory, a small portion is really just because managers are stupid or someone rolls a 1 on their INT check.
So yeah. In some cases Talesin is being a total dork. You don't always have to use C when Java will work just fine (see one-shot code, or ugly Perl scripting that "just works" for some parsing and serves as glue between heavy processes). On the other hand, core processes that are 10x slower because someone wrote garble in Java/Python, didn't know how to manage their memory, and assumed the compiler would handle it are a real and present danger to the sanity and success of many companies.
It's all about scope. And while Talesin may not be right in all cases, his anger is justified.
Valid.
Also, on a side note, I have known people who only knew Java, and once they learned assembly and C they became startlingly better. On the other hand, I've seen lots and lots of people get churned out with a CS major from the universities, and I know they had an assembly/C programming class, yet their code is still crap.
It's almost a criterion for Turing completeness.
Just $0.02, because it appears that you guys are using the same points, but getting mad at what each other is saying. Maybe it's just a misunderstanding?
A crappy coder is crappy no matter what the language.
A program written in C by a crappy coder will probably reveal that he/she is terrible.
A program written in something like C#/Java will act as a crutch for crap skill. Sure, it works, but it's masking malignant badness.
It's like breast implants. Nobody actually blames the silicone for being fake. They mostly look and act the part. And they don't directly say anything about their owner. But they <i>imply</i> a lot about their owner.
I think the point Talesin is trying to get across is that there are basically frauds running around convincing people they're the real deal, and without the crutch they wouldn't be nearly as widespread.
(o Y o) .... think about it.
It's a tough pill to swallow, I agree. Having spent the last two years straightening out a port of C# to Qt C++ that was basically done as copy-pasta and then beaten with a hammer until it compiled, I know all too well the problems of memory-management crap. But there wasn't time for a proper C++ port, and looking back, even though I'm putting out all these fires someone else created, it was the right call to rush it like that: now we have this new product, and it's bringing in more work, which means I still have a job. Just to reinforce the trade-off ideas that are core to engineering.
A couple of Spolsky articles that changed my views on this subject:
<a href="http://www.joelonsoftware.com/articles/fog0000000069.html" target="_blank">http://www.joelonsoftware.com/articles/fog0000000069.html</a>
<a href="http://www.joelonsoftware.com/articles/fog0000000356.html" target="_blank">http://www.joelonsoftware.com/articles/fog0000000356.html</a>
The other analyst in our department and I are so thankful we're not officially in IT and don't have to deal with all the red tape they do. If someone requests a tool from IT, it sits for a month in the queue, then they propose it to senior management, discuss priority, investigate options, hire a consultant, scrap it 3 times, go over budget, and if and when it's ever finished, no one even needs it anymore. When *we* need a tool, we just build it.
But I'm getting off topic.
:(
Discussion over.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->Discussion over.<!--QuoteEnd--></div><!--QuoteEEnd-->
Turn that frown upside down. I like it.
<a href="http://www.joelonsoftware.com/articles/Wrong.html" target="_blank">http://www.joelonsoftware.com/articles/Wrong.html</a>
There's the article on why Hungarian notation (as it was originally designed, misinterpreted, and then rediscovered) is good.
Labeling variables by compiler type is no longer useful with IntelliSense and everything. But using Hungarian for things like pointers or, as in the article above, safe vs. unsafe strings can help the programmer spot errors more easily.
Read the article if you don't believe me. "Systems Hungarian" is crap. But the spirit in which Hungarian was designed ("Apps Hungarian") actually does provide some good value.
<!--quoteo--><div class='quotetop'>QUOTE </div><div class='quotemain'><!--quotec-->There's the article on why Hungarian notation (as it was originally designed, misinterpreted, and then rediscovered) is good.
Labeling by compiler type is not useful with Intellisense and everything. Using Hungarian for things like pointers or, as in the article above, safe vs unsafe strings can help the programmer to more easily spot errors.
Read the article if you don't believe me. "Systems Hungarian" is crap. But the spirit in which Hungarian was designed ("Apps Hungarian") actually does provide some good value.<!--QuoteEnd--></div><!--QuoteEEnd-->
Yeah, using notation to distinguish safe from unsafe strings and pointers is pretty useful, and the whole article he writes, about adopting conventions that make errors visible while coding, is great stuff.
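For the curious, the safe/unsafe convention from the article looks roughly like this in C++. The us/s prefixes follow Spolsky's "Apps Hungarian" example; `encode` is a deliberately minimal sketch, not a complete HTML encoder, and all names here are illustrative:

```cpp
// Apps Hungarian sketch: "us" = unsafe (raw user input), "s" = safe
// (HTML-encoded). With the convention in place, a line that writes a
// "us" variable straight to output looks wrong at a glance.
#include <string>

// By convention, the parameter is unsafe and the return value is safe.
std::string encode(const std::string& usInput) {
    std::string sOut;
    for (char c : usInput) {
        if (c == '<')      sOut += "&lt;";
        else if (c == '>') sOut += "&gt;";
        else if (c == '&') sOut += "&amp;";
        else               sOut += c;
    }
    return sOut;
}
```

The point of the article is that the prefix encodes a *semantic* property the compiler doesn't check, so a human reviewer can catch `write(usName)` without tracing any data flow.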
Programming against Microsoft's API is soooo painful, though, and like it or not, that's what Hungarian notation brings to mind. I hate having to remember that a WORD is 16-bit unsigned, even on a 32-bit machine. I hate to keep bringing up my love affair with Qt, but their typedefs are a lot better: quint16, quint32, qint64, etc.
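The same self-documenting width names are available in standard C++ without Qt; a small sketch (the short aliases are my own, not Qt's or the standard's):

```cpp
// Fixed-width integer typedefs in the spirit of Qt's quint16/qint64.
// Standard <cstdint> puts the width in the name, unlike Win32's
// WORD/DWORD, whose sizes are historical accidents you must memorize.
#include <cstdint>

using u16 = std::uint16_t;  // always 16-bit unsigned, on any machine
using i64 = std::int64_t;   // always 64-bit signed

// The compiler itself can vouch for the widths:
static_assert(sizeof(u16) == 2, "u16 must be two bytes");
static_assert(sizeof(i64) == 8, "i64 must be eight bytes");
```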
On the string side, though, I guess you can't get much better than what Microsoft already does. Let's be honest, the API is the culmination of decades of development; it's a miracle that it's still compatible with programs written for Vista.
Now the really interesting bugs are those where you overflow the stack or somehow overwrite code. (x86 + NX doesn't really allow it but there are other platforms)
Now, about languages. Learn C, and you'll be a better C++ coder. Learn C#, you'll be a better C++ coder. Learn Lisp, you'll be a better C++ coder. Learn Haskell, you'll be a better C++ coder. At the end, you probably won't be so eager to write everything in C++.
When it comes to high-level languages: I can write a program that downloads stock quotes from Yahoo, then processes them and graphs the result, in 10-20 lines. All asynchronously. As efficient as possible. In C# it's around 50-500 lines, depending on whether I want it to be fast (not wasting time waiting for the previous HTTP request to complete). In C++ it would take even more. A high level of abstraction (abstraction hides unimportant details only, see the definition) means shorter programs, which means fewer bugs.
Garbage collection is simply the best solution there is for multithreaded applications. C++ nerds (who never saw anything else, nor wrote any bigger multithreaded app) will brag about how everything should use RAII or shared/boost ptrs. Because of exceptions, you can't use C's way of dealing with memory. If you put new/delete in the ctor/dtor, then it only works with a single thread. Ref counting as implemented by boost/shared/whatever ptrs is obviously worse than GC because it leads to leaks (cycles). Manual weak references are not a solution. But let's say you never have cycles. It's still bad, because reference counting is painfully slow in a multithreaded environment. So now we know that we have to use a GC. But C++ is designed in a way that doesn't allow the use of an efficient garbage collector. See:
<a href="http://flyingfrogblog.blogspot.com/2011/01/boosts-sharedptr-up-to-10-slower-than.html" target="_blank">http://flyingfrogblog.blogspot.com/2011/01...lower-than.html</a>
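The cycle leak blamed on reference counting above can be reproduced in a few lines. This is a minimal sketch (Node and makeCycle are illustrative names; the static counter just makes the leak observable); breaking the cycle would require manually choosing which edge becomes a std::weak_ptr, which is exactly the "manual weak references" the post objects to:

```cpp
// Demonstrates the shared_ptr cycle leak: two objects that point at each
// other via shared_ptr keep each other's use count above zero forever.
#include <memory>

struct Node {
    std::shared_ptr<Node> other;  // strong reference; creates the cycle
    static int alive;             // crude leak detector
    Node()  { ++alive; }
    ~Node() { --alive; }
};
int Node::alive = 0;

void makeCycle() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->other = b;
    b->other = a;  // cycle closed: neither use count can reach zero
}   // a and b go out of scope here, yet both Nodes remain alive (leaked)
```

A tracing GC collects this shape without any annotation, because it asks "is this reachable from a root?" rather than "does anything still point at it?".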
C++ has the worst design of any popular language (yes, worse than Java :P), but it still has the best compilers out there, lots of libraries, and simple interop with C. That's why I like the "C++ Annotations" book, which untangles all the mess in C++'s design piece by piece.
Hungarian for safe/unsafe? I can do that with the type system: newtype UnsafeString = UnsafeString { unpackUnsafeString :: String }. That's painful with C++, but not with Haskell (or any language with a nice type system).
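For comparison, a rough C++ analogue of that newtype is possible with wrapper structs; it does the same job at the cost of noticeably more ceremony, which rather supports the point about pain. The names mirror the Haskell one-liner above and are purely illustrative:

```cpp
// Strong-typedef sketch: distinct wrapper types make safe vs. unsafe a
// compile-time property instead of a naming convention.
#include <string>
#include <utility>

struct UnsafeString {
    explicit UnsafeString(std::string s) : value(std::move(s)) {}
    std::string value;  // plays the role of unpackUnsafeString
};

struct SafeString {
    explicit SafeString(std::string s) : value(std::move(s)) {}
    std::string value;
};

// Only a SafeString can be rendered; passing an UnsafeString here is a
// compile error rather than something a reviewer has to spot by prefix.
std::string render(const SafeString& s) { return s.value; }
```

Where Apps Hungarian relies on a human noticing the wrong prefix, the wrapper-type version lets the compiler reject the mistake outright.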