November 23rd, 2024

Don’t look down on print debugging

Print debugging is a straightforward and effective way to identify code issues: place print statements, analyze the output, and remove them once the problem is resolved. Automated tests further enhance the debugging process.


The article discusses the often-overlooked value of print debugging in programming. Despite the prevalence of sophisticated debugging tools, print debugging remains a straightforward and effective method for identifying issues in code. The author argues that print statements help programmers focus on understanding their code and verifying assumptions, making it a valuable practice rather than a shameful one. The workflow for print debugging involves discovering a problem, strategically placing print statements, analyzing the output, fixing the issue, and then removing the print statements before finalizing the code. The author emphasizes that while more advanced tools can be beneficial, print debugging should not be dismissed due to its simplicity and universality. Additionally, the article highlights the importance of building automated tests alongside code to facilitate easier debugging, as it allows developers to isolate problems more effectively. The author concludes by acknowledging that print debugging is a legitimate and powerful tool that can complement more sophisticated methods.

- Print debugging is a simple yet effective method for identifying code issues.

- The workflow involves placing print statements, analyzing output, and removing them after resolving the issue.

- Automated tests can enhance the debugging process by isolating problems.

- Print debugging should not be dismissed in favor of more complex tools.

- Understanding code through print statements can lead to better debugging practices.

55 comments
By @llm_nerd - 5 months
While print-type debugging has a place, the reason there are a lot of articles dissuading the practice is the observed reality that people who lean on print debugging often have incomplete knowledge of the immense power of modern debugging tools.

This isn't just an assumption I'm making: years of being in developer leadership roles, and then watching a couple of my own sons learning the practice, has shown me in hundreds of cases that if print-type debugging is seen, a session demonstrating how to use the debugger to its fullest will be a very rewarding effort. Even experienced developers from great CS programs sometimes are shocked to see what a debugger can do.

Walk the call stack! See the parameters and values, add watches, set conditional breakpoints to catch that infrequent situation? What! It remains eye opening again and again for people.
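
Even without an IDE, the "conditional breakpoint" idea is easy to try. A minimal Python sketch (the `handle` function is hypothetical): guard the built-in `breakpoint()` (Python 3.7+) with the rare condition you want to catch.

```python
# Conditional-breakpoint sketch: only stop on the infrequent situation.
def handle(request_id, payload):
    if payload is None:          # the rare case worth stopping on
        # breakpoint()           # uncomment to pause here; 'w' shows the call stack
        pass
    return request_id, payload

print(handle(1, {"ok": True}))
```

IDE debuggers attach the condition to the breakpoint itself, which avoids touching the source at all; this is just the portable fallback.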

Not far behind is finding a peer trying to eyeball complexity to optimize, to show them the magic of profilers...

By @montroser - 5 months
Some of the trickiest bugs to hunt down are those involving concurrency and non-deterministic timing. In these cases, stepping through a debugger is not at all what you want, since you may actually change the timing just by trying to observe.

To see the nature of the race condition, just put some print statements in some strategic locations and then see the interleaving, out of order, duplicate invocations etc that are causing the trouble. It's hard to see this type of stuff with a debugger.
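
A minimal Python sketch of the idea (the worker code is made up): the trace statements record the read-modify-write interleaving that pausing in a debugger would perturb.

```python
import threading

counter = 0  # shared state, deliberately unprotected

def worker(name, steps):
    global counter
    for _ in range(steps):
        local = counter                      # read
        print(f"[{name}] read  counter={local}")
        counter = local + 1                  # write: classic lost-update race
        print(f"[{name}] wrote counter={counter}")

threads = [threading.Thread(target=worker, args=(f"t{n}", 3)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"final counter={counter}")  # may be less than 6 when updates interleave
```

The printed trace shows exactly which reads and writes interleaved, something a breakpoint would likely prevent from happening at all.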

By @cjs_ac - 5 months
The problem with all these 'print debugging is good' and 'print debugging articles are bad' is that none of them provide any context. Print debugging is just one tool in a box full of tools. Pick the right one for the right job. An article that walks through how to make that decision would be very useful. An article that just picks a side in a decades-old debate is just noise.
By @Al-Khwarizmi - 5 months
I'm a CS professor teaching some programming courses. I always make an effort to teach my students how to use a debugger and encourage them to use it, because otherwise they will not even try (print debugging comes naturally, while using the debugger requires more effort). I want them to master it so they can make conscious decisions about what to use.

Then, when I code myself, I use print debugging like 99.9% of the time :D I have the feeling that, for me, the debugger tends to be not worth the effort. If the bug is very simple, print debugging will do the job fast so the debugger would make me waste time. If the bug is very complex, it can be difficult to know where to set the breakpoints, etc. in the debugger (let alone if there's concurrency involved). There is a middle ground where it can be worth it but for me, it's infrequent enough that it doesn't seem worth the effort to spend time making the decision on whether to use the debugger or not. So I just don't use it except once in a blue moon.

I'm aware this can be very personal, though, hence my tries to have my students get some practice with the debugger.

By @piinbinary - 5 months
I like debuggers and use them when I can, but folks who say you should only use debuggers tend to not realize:

* Not all languages have good debuggers.

* It's not always possible to connect a debugger in the environment where the code runs.

* Builds don't always include debug symbols, and this can be very high-friction to change.

* Compilers sometimes optimize out the variable I'm interested in, making it impossible to see in a debugger. (Haskell is particularly bad about this)

* As another commenter mentioned, the delay introduced by a debugger can change the behavior in a way that prevents the bug. (E.g. a connection times out)

* In interpreted languages, debuggers can make the code painfully slow to run (think multiple minutes before the first breakpoint is hit).

One technique that is easier to do in printf debugging is comparing two implementations. If you have (or create) one known-good implementation and have a buggy implementation, you can change the code to run both implementations and print when there's a difference in the result (possibly with some logic to determine if results are equivalent, e.g. if the resulting lists are the same up to ordering).
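
The known-good-vs-buggy comparison can be sketched in a few lines of Python (`buggy_sort` is a made-up stand-in for the suspect implementation; `sorted()` plays the reference):

```python
# Run both implementations on the same inputs and print any divergence.
def buggy_sort(xs):
    # deliberately wrong: drops duplicates
    return sorted(set(xs))

def check(xs):
    good, candidate = sorted(xs), buggy_sort(xs)
    if good != candidate:
        print(f"MISMATCH for {xs!r}: expected {good!r}, got {candidate!r}")
        return False
    return True

cases = [[3, 1, 2], [1, 1, 2], []]
results = [check(c) for c in cases]
```

The second case prints a mismatch, immediately pointing at duplicate handling as the bug.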

By @bbkane - 5 months
I think in many scenarios, print debugging wins the cost/benefit analysis - new project where I don't want to learn how to set up a debugger, trying to debug something really custom or specially formatted, etc.

However, if I know I'm going to be working on a project for a long time, I usually try to pay the upfront cost of setting up a debugger for common scenarios (ideally I try to make it as easy as hitting a button). When I run into debugging scenarios later, the cost/benefit analysis looks a lot better - set a breakpoint, hit the "debug" button, and boom, I can see all values in scope and step through code.

By @cdrini - 5 months
I agree with the title that you shouldn't look down on print debugging. You should look down on people who are slow at fixing bugs and who refuse to try tooling to be able to keep pace -- and using print statements can sometimes be a marker of this. But print debugging is just a tool and has its place, and the use of print debugging is not itself an indicator of poor developer performance. If you can find/fix bugs as quickly as I can with a debugger using print statements, then I don't care what you use. If you can do it faster, then I'm going to try to steal your techniques so I can be faster too. If you do it slower, then I hope you would steal my techniques.

Don't care about the tool care about the performance.

Anecdotally, debuggers are faster than print statements in most cases for me. I've been able to find bugs significantly faster using a debugger than with using print statements. I still do use print statements on occasion when I'm developing something where a debugger is very complicated to set up, or in cases where I'm dealing with things happening in parallel/async, where a debugger is less suited. I'm not going to shame you for using print statements, but I do hope that you've tried both and are familiar/comfortable with both approaches and can recognise their strengths/weaknesses -- something I'm not convinced of by this author, which only outlines the strengths of one approach.

Also not a fan of the manufactured outrage of saying people are being "shamed" for using print statements. Coupled with listing a bunch of hyperbolic articles -- many of which don't even seem to be about debugging but about logging libraries.

(Also as a side note: don't forget if you are using print statements for debugging to check if your language buffers the print output!! You'll likely want to have it be unbuffered if you're using print for debugging)
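
In Python, for instance, a tiny helper (the name `debug` is hypothetical) sidesteps the buffering issue by writing to stderr and flushing, so output appears immediately and survives a crash:

```python
import sys

def debug(*args):
    # stderr plus an explicit flush: output is never stuck in a buffer
    print(*args, file=sys.stderr, flush=True)

debug("value of x:", 42)
```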

By @gregjor - 5 months
If you feel shame because of random opinions and articles on the internet, I’d address that before worrying about “print” vs. a real debugger.

I pretty much only use print debugging. I know how to use a real debugger but adding print/console.log etc. keeps me from breaking context and flow.

By @anotherevan - 5 months
Talking about print debugging has reminded me of a time I spent two weeks using a sideways form of print debugging to track down a timing bug. It was on an embedded system, and the bug when tripped would take out the serial communications line: at which point I couldn't get any diagnostics, not even print statements!

An in-circuit emulator was unavailable, so stepping through with a debugger was also not an option.

I ended up figuring out a way to be able to poke values into a few unused registers in an ancillary board within the system, where I could then read the values via the debug port on that board.

So I would figure out what parts of the serial comms code I wanted to test and insert calls that would increment register addresses on the ancillary board. I would compile the code onto a pair of floppy disks, load up the main CPU boards and spend between five and ninety minutes triggering redundancy changeovers until one of the serial ports shat itself.

After which I would probe the registers of the corresponding ancillary board to see which register locations were still incrementing and which were not, telling me which parts of the code were still being passed through. Study the code, make theories, add potential fixes, remove register increments and put in new ones, rinse and repeat for two weeks.

By @andreimatei1 - 5 months
Print debugging is the tool most people reach for when they can, but its biggest problem is that you have to change the source code to add the printfs. This is impractical in many circumstances; it generally only works on your local machine. In particular, you can't do that in production environments, and that's where the most interesting debugging happens. Similarly, traditional debuggers are not available in production either for a lot of modern software -- you can't really attach gdb to your distributed service, for many reasons.

What print debugging and debuggers have in common, in contrast to other tools, is that they can extract data specific to your program (e.g values of variables and data structures) that your program was not instrumented to export. It's really a shame that we generally don't have this capability for production software running at scale.

That's why I'm working on Side-Eye [1], a debugger that does work in production. With Side-Eye, you can do something analogous to print debugging, but without changing code or restarting anything. It uses a combination of debug information and dynamic instrumentation.

[1] https://side-eye.io/

By @frou_dh - 5 months
Shame? My perception is that the situation is quite the opposite, in that tons of people online proudly proclaim at any opportunity that they only use print-debugging.

Which in some cases I see as related to a sort of macho attitude in programming where people are oddly proud of forgoing using good tooling (or anything from the 21st century really).

By @fortyseven - 5 months
If anyone is making you feel ashamed for using one of the most fundamental, bread and butter debugging techniques, that's a red flag about that person. Not you. If there are better tools available, fine, use 'em. But there is absolutely nothing wrong with tossing out a console log to see what's going on.
By @Barrin92 - 5 months
Print debugging is a disaster. If there's one widespread practice among non-beginner developers that wastes hours of time it's print debugging. I honestly can't tell how often I have seen people, even experienced programmers (usually because they insist on running some bespoke vim setup) refuse to use a graphical debugger to actually step through a program, and instead they spend hours hunting down bugs they could have found in ten minutes.

There's a section of an interview with John Carmack (https://youtu.be/tzr7hRXcwkw) where he laments the same thing. It's what the Windows/game development corner of the programming world actually got right, people generally use effective tools for software development.

By @technothrasher - 5 months
Hmm, all those "don't use print" headlines shown in the article seem to be simply click-bait headlines for articles that aren't really shaming print debugging but instead illustrating other debugging tools that some programmers may not know about.

I remember the good old days when I was first learning programming with Applesoft BASIC where print debugging was all there was, and then again in my early days of 8051 programming when I didn't yet have the sophisticated 8051 ICE equipment to do more in depth debugging. Now with the ARM Cortex chips I most often program and their nice SWD interface, print debugging isn't usually necessary. But I still use it occasionally over a serial line because it is simple and why not?

By @imtringued - 5 months
Printing gives you a trace, breakpoints give you a point in time. They are two different things.

The closest between the two is a logging breakpoint, but the UI for them is generally worse than the UI of the main editor and the logging breakpoint has the same weakness as regular print calls, i.e. you've turned the data into a string and can therefore no longer inspect the objects in the trace.

What I would expect from a debugger in IntelliJ is that when you set a logging breakpoint, then the editor inserts the breakpoint logic source code directly inline with the code itself, so that you can pretend that you are writing a print call with all the IDE features, but the compiler never gets to see that line of code.

By @Terr_ - 5 months
There are three requirements for me to be comfortable with a team culture of print-debugging.

1. If a breakpoint debugger exists for the stack, it should still be convenient and configured, and the programmer should have some experience using it. It's a skill/capability that needs to be in reserve.

2. The project has automatic protections against leftover statements being inadvertently merged into a major branch.

3. The dev environment allows loading in new code modules without restarting the whole application. Without that, someone can easily get stuck in rather long test iterations, especially if #1 is not satisfied and "it's too much work" to use another approach.

By @nickcw - 5 months
Debugging with print has one feature that debuggers will never match. You can send the binary or code with print/log statements to someone else who is experiencing the problem and get them to run it.

Often I have to debug bugs I can't reproduce. If method 1 - staring at the code - doesn't work, then it's add print/log statements and send it to the user to test. Repeat until you can reproduce the bug yourself or you fixed it.

By @GuB-42 - 5 months
I think print debugging is a symptom of an underlying problem.

There is nothing bad about print debugging, there is no reason to avoid it if that's what works with your workflow and tools. The real question is why you are using print and not something else. In particular, what print does better than your purpose-built debugger? If the debugger doesn't get used, maybe one should look down on that particular tool and think of ways of addressing the problem.

I see many comments against print debugging that go along the lines of "if you learn to use a proper debugger, that's so much better". But in many modern languages that's actually the problem: you have to invest a lot of time and effort in something that should be intuitive. I remember when I started learning programming, with QBasic, Turbo Pascal, etc... using the debugger was the default, and so intuitive I used a debugger before even knowing what a debugger was! And it was 90s tech; now we have time travel debugging, hot reloading, and way more capable UIs, but for some reason things got worse, not better. Though I don't know much about it, it seems the only ones who get it right are in the video game industry. The rest tend to be stuck with primitive print debugging.

And I say "primitive" not because print debugging is bad in general, but because if print debugging was really to be embraced, it could be made better. For example by having dedicated debug print functions, an easy way to access and print the stack trace, generic object print, pretty printers, overrides for accessing internal data, etc... Some languages already have some of that, but often stopping short of making print debugging first class. Also, it requires fast compilation times.

By @scotty79 - 5 months
I think that persistence of print debugging shows weaknesses of debuggers.

Complicated setup, slow startup, separate custom UI for adding watches and breakpoints.

Make a debugger integrated with the language and people will use it.

You can then pile subsequent useful features on top, but you have to get the basic UI right first. Because half of programmers now are willing to give up stepping, tree inspection, even breakpoints, just to avoid dealing with the crappy UI of debuggers.

By @lispm - 5 months
Esoteric: I use print debugging on a Lisp Machine using a presentation based Read Eval Print Loop (REPL), similar things would work in some other Common Lisp environments. Presentation based means that the REPL remembers all output and the objects associated with that output.

    (defmethod move ((a-ship object) (a-place port))
      ; do something
      )

    (defmethod move :around (what where)
      (print `(start moving object ,what to ,where))
      (call-next-method))
Above prints the list to the REPL. The REPL prints the list as data, with the objects WHAT and WHERE included. It remembers that a specific printed output is caused by some object. Later these objects can be inspected or one can call functions on them...

This combines print debug statements with introspection in a read-eval-print-loop (REPL).

Writing the output as :before/:around/:after methods or as advice makes it easier to later remove all print output code without changing the rest of the code -> methods and advice can be removed from the code at runtime.

By @david-gpu - 5 months
If a good debugger is available, it is a great tool to have. But it is just one out of many tools. Some are more effective than others in different situations.

For example, I rarely used a debugger in my career as an Android driver developer (mostly C), for several reasons.

1. My first step when debugging is looking at the code to build working hypotheses of what sort of issues could be causing the incorrect behavior that is observed.

2. I find assertions to be a great debugging tool. Simply add extra assertions in various places to have my expectations checked automatically by the computer. They can typically unwind the stack to see the whole call trace, which is very useful.

3. Often, the only choice was command-line GDB, which I found much slower than GUI debuggers.

4. Print statements can be placed inside if statements, so that you only print out data when particular conditions occur. Debuggers didn't have as much fine control.

5. Debugging multi threaded code. Prints were somewhat less likely to interfere with race conditions. I sometimes embedded sleep() calls to trigger different orderings to occur.
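
Points 2 and 4 above can be sketched together in Python (the `process` function and its fields are hypothetical):

```python
def process(order):
    # assertion: have the computer check the expectation automatically
    assert order["qty"] > 0, f"non-positive qty in {order!r}"
    # conditional print: only emit output when the suspicious case occurs
    if order["qty"] > 100:
        print(f"DEBUG large order: {order!r}")
    return order["qty"] * order["price"]

total = process({"qty": 2, "price": 5.0})
```

The assertion fails loudly (with a stack trace) the moment the expectation is violated, while the guarded print stays silent on the common path.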

By @ryandrake - 5 months
Learning how to use my C debugger felt like a super power. None of my (mid-90s) CS courses even mentioned the existence of a debugger, let alone how to use one. At my first job I had to learn it on the fly, and it was one of the most useful tools I picked up post-university.

Print debugging was pretty useless back then because compilation took minutes (a full compile took over an hour) rather than milliseconds. If your strategy was "try something, add a print, compile, try something else, add a print, compile" then you were going to have a very bad time.

People working on modern, fast-dev-cycle, interpreted languages today have it easy. You don't know the terror of looking at your code, making sure you have thought of "everything that you're going to need to debug that problem" and hitting compile, knowing that you'll know after lunch whether you have enough debugging information included. I'm sure it was even worse in the punch card era!

By @anotherpaulg - 5 months
I have long relied on a print-debug function I wrote for python called dump(). You do dump(foo) and it will print out "foo: value", where the variable name "foo" is magically pulled out of the source code and "value" is a json-dump of its value. So dicts look pretty, like the example below.

This is similar to the "debug f-strings" introduced in python 3.8: print(f"{foo=}"). But it's much easier to type dump(foo) and you get prettier output for complex types.

  x = 3 
  foo = dict(bar=1, baz=dict(hello="world"))
  dump(x) 
  dump(foo)
  
  # prints...
  
  x: 3 
  foo:
  {
      "bar": 1,
      "baz": {
          "hello": "world"
      }
  }
https://github.com/Aider-AI/aider/blob/main/aider/dump.py
By @samatman - 5 months
The article draws a distinction between logging and print debugging, which it should, but in recent work that distinction has been less important to me in practice.

I mostly write Zig these days (love it) and the main thing I'm working on is an interactive program. So the natural way to test features and debug problems is to spin the demo program up and provide it with input, and see what it's doing.

The key is that Zig has a lazy compilation model, which is completely pervasive. If a branch is comptime-known to be false, it gets dropped very early, it has to parse but that's almost it. You don't need dead-code elimination if there's no dead code going into that phase of compilation.

So I can be very generous in setting up logging, since if the debug level isn't active, that logic is just gone with no trace. When a module starts getting noisy in the logs, I add a flag at the top `const extra = false;`, and drop `if (extra)` in front of log statements which I don't need to have printing. That way I can easily flip the switch to get more detail on any module I'm investigating. And again, since that's a comptime-known dead branch, it barely impacts compiling, and doesn't impact runtime at all.

I do delete log statements where the information is trivial outside of the context of a specific thing I'm debugging, but the gist of what I'm saying is that logging and print debugging blend together in a very nice way here. This approach is a natural fit for this kind of program, I have some stubs for replacing live interaction with reading and writing to different handles, but I haven't gotten around to setting it up, or, as a consequence, firing up lldb at any point.

With the custom debug printers found in the Zig repo, 'proper' debugging is a fairly nice experience for Zig code as well, I use it heavily on other projects. But sometimes trace debugging / print debugging is the natural fit to the program, and I like that the language makes it basically free to use. Horses for courses.

By @dhashe - 5 months
Print debugging is a great tool in unfamiliar environments. As the article notes, it’s simple and works everywhere.

I do think that it’s worth learning your debugger well for programming environments that you use frequently.

In particular, I think that the debugger is exceptionally important vs print debugging for C++. Part of this is the kinds of C++ programs that exist (large, legacy programs). Part of this is that it is annoying to e.g. print a std::vector, but the debugger will pretty-print it for you.

I wrote up a list of tips on how to use gdb effectively on C++ projects awhile back, that got some discussion here: https://news.ycombinator.com/item?id=41074703

It is tricky. I understand why people have a bad experience with gdb. But there are ways to make it better.

By @pino82 - 5 months
There are good arguments for both sides, and there is no contradiction. Why shouldn't they both be good tools, depending on the specific case?

I do print debugging most of the times, together with reasoning and some understanding what the code does (!), and I'm usually successful and quick enough with it.

The point here is: today's Internet, with all the social media stuff, is an attention economy. And some software developers try to get their piece of the cake with extreme statements. They exaggerate and maximally praise or demonize something because it generates better numbers on Twitter. It's as simple as that. You shouldn't take everything too seriously. It's people crying for more attention.

By @shpx - 5 months
If anything the effectiveness/necessity of manually adding print statements to get any feedback about what the program you're working on is doing makes me look down on software development in general.

We are working on a system that could have nearly total visibility, down to showing us a simulation of individual electrons moving through wires, yet we're programming basically blind. The default is I write code and run it, without any visual/intuitive feedback about what it's doing besides the result. So much of my visual system goes completely unused. Also, debuggers can be a pain to set up, way more reading and typing than "print()".

By @oftenwrong - 5 months
I still use a debugger for much of my print debugging needs by setting non-suspending breakpoints. This is useful because it allows me to change the printing dynamically, set breakpoints in library code, attach to running processes, and more.
By @ufmace - 5 months
I've used both print and proper debuggers plenty. I tend to lean on print debugging more these days. The thing about debuggers, in addition to often being a headache to set up, it usually seems tricky and time-consuming to get it to step to the lines you actually want to examine and skip the stuff you don't. And if you step past something but then later realize it was important, time to start over.

It's often faster and easier to set things up to run test cases fast and drop some prints around. Then if there's too much unimportant stuff or something else you want to check on, just switch around the prints and run it again.

By @II2II - 5 months
It feels like the author is making the opposite point when they distinguish between logging and print debugging. Logging is the permanent bits of code that either ship with the program or are disabled when the code is built for release. Print debugging is temporary bits of code, manually added and removed as needed, never intended to ship. If that is the distinction being made, then print debugging is problematic, since the developer has to be diligent about removing it once it is no longer needed.

That said, I use print debugging all of the time. It is simply more practical in many cases.

By @thecrumb - 5 months
As a ColdFusion developer (it still pays the bills) I've been doing this forever, as Adobe has never really built a good step debugger. Their latest IDE has one, but it's very difficult to set up and is buggy when it does work. BoxLang is modernizing CFML (among other things) and has a nice, working step debugger... https://boxlang.io/

It's always so weird to switch to another language which DOES have a debugger...

By @Bengalilol - 5 months
I love print debugging. It gives precise and fast information without any prior setup and when I am fully aware of every bit of code I wrote, it never misleads me.
By @rspoerri - 5 months
For me print debugging is the best way to work on a script or code if I don't know when debugging will be finished. Most debugging sessions (using a debugger) I ever did were very complex situations where I knew how to trigger the error. And while I am sure that you can save debugging sessions, I just don't need another tool to learn and install on the multiple computers where I test and run my code.
By @ghgr - 5 months
If you're using print debugging in python try this instead:

> import IPython; IPython.embed()

That'll drop you into an interactive shell in whatever context you place the line (e.g. a nested loop inside a `with` inside a class inside a function etc).

You can print the value, change it, run whatever functions are visible there... And once you're done, the code will keep running with your changes (unless you `sys.exit()` manually)

By @llm_trw - 5 months
I'd love for someone to explain to be how you can use a debugger cross half a dozen languages. Print works on everything from forth to python. In my experience you need to learn at least one debugger per language with a whole bunch of corner cases where they are outright misleading. Has the situation magically changed in the last 10 years?
By @wkirby - 5 months
My only complaint about print debugging is the sheer volume of commented out console.log statements I see across code bases. Or worse, not commented out and happily logging away on prod. Seriously, leave your console open as you browse around — you’ll be astounded by the amount of debug output just rolling along on production.

Delete your debug cruft!

By @FartyMcFarter - 5 months
Concise argument for print debugging:

Print debugging is how you make software talk back to you. Having software do that is an obvious asset when trying to understand what it does.

There are many debugging tools (debuggers, sanitisers, prints to name a few), all of them have their place and could be the most efficient route to fixing any particular bug.

By @morkalork - 5 months
Debugger vs print? No, the most smug response to that debate I ever heard was "Neither, I use unit tests".
By @psychoslave - 5 months
In the nightmare kitchen sink of web transpilers and meta-frameworks, yes, just printing it is often way more efficient than trying to make sense of those useless stack traces, or setting up a brittle debugger configuration in your IDE that sooner rather than later will invariably lack a map to know what code it should show.
By @antman - 5 months
For those doing print debugging in python see the screenshots here https://github.com/likianta/lk-logger I assume it is not known given it only has a few stars. Adds source location to print and exception outputs.
By @Cannabat - 5 months
The article mentions it, but you can sum up print debugging as selectively enabling verbose/trace logging. “We’re about to do X” or “We just did Y, here is Z”.
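
With Python's standard logging module, that "selectively enabled trace" reads roughly like this sketch (the logger name and function are made up):

```python
import logging

logging.basicConfig(level=logging.DEBUG)  # flip to WARNING to silence the traces
log = logging.getLogger("pipeline")

def transform(x):
    log.debug("about to do X with x=%r", x)
    result = x * 2
    log.debug("just did Y, result=%r", result)
    return result

transform(21)
```

The statements stay in the code permanently; the level switch decides whether they behave like print debugging or stay quiet.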

A debugger gives you insight into the context of a particular code entity - expression, function, whatever.

Seems silly to be dogmatic about this. Both techniques are useful!

By @MrHamburger - 5 months
Print debugging is great if you are debugging multiple processes, maybe even multiple computers at the same time. But it can also be a bane, as print debugging directly influences the run of your program, so it can temporarily fix existing bugs or create new ones by its sheer presence.
By @kvemkon - 5 months
Recent discussion thread about printf-style vs true gdb debugging started here:

https://news.ycombinator.com/item?id=42146864

from "Seer: A GUI front end to GDB for Linux" (15.11.2024)

By @konfekt - 5 months
If you buy in and happen to use Vim, there are plug-ins [0] to print debug by single keystrokes.

[0] : https://github.com/bergercookie/vim-debugstring

By @jmclnx - 5 months
>Print debugging is awesome and you should feel no shame in using it!

Print debugging is my main go-to. Plus I have no shame :)

I have this in buffer 'd' on vim and Emacs ready for use:

    fprintf(stderr, "DEBUG %s %d -- \n", __FILE__, __LINE__); fflush(stderr);

By @bufferoverflow - 5 months
When your state changes quickly and you can't just pause execution for whatever reason, print debugging works.

Also on the FE it's often much easier to just console.log than to set breakpoints in the sources in your browser.

By @scotty79 - 5 months
Debuggers present you a single moment of execution but print debugging presents you the entire history of execution of relevant fragment all at once.

Create a debugger that has easily accessible history of execution and we can talk.

By @yunusefendi52 - 5 months
To be fair, I think in C# one should avoid Console.WriteLine as much as possible, because Console.WriteLine is blocking, which makes API requests slow; instead use a logger like Serilog with async sinks.
By @FpUser - 5 months
Multithreading and locking are the areas where print/blinking LED on microcontroller based stuff and some other trickery are useful. Otherwise I prefer "normal" debuggers like with powerful IDE
By @amelius - 5 months
Just don't forget to remove your print statements in the end ...
By @anovikov - 5 months
I mostly use print debugging, and not because I don't know how "real" debugging works. It's just that most of the code I deal with handles data that's flowing in real time. So you stop on a breakpoint, and the app has already broken because the incoming data and/or whatever consumes the output didn't stop.

The data streams of course can be simulated, and then "true" debugging with breakpoints and watches becomes practical, but the simulation is never 100%, and getting it close to 100% is sometimes harder than debugging the app with print debugging. So with most of the code, I only use the debugger to analyse crash dumps.

By @Balgair - 5 months
Wait, there is another way?

Looking at the comments here, I'm going to have to try to figure out how to use a debugger in pycharm on Monday!

Any tips or good videos on this?
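
Not a PyCharm tutorial, but as a starting point the language itself ships a debugger, pdb. A sketch (`average` is just a toy function):

```python
# Run `python -m pdb your_script.py`, or insert breakpoint() where you want to stop.
# Useful pdb commands: n (next line), s (step into), p expr (print), c (continue).
def average(xs):
    total = 0
    for x in xs:
        # breakpoint()   # uncomment to pause here and inspect x and total
        total += x
    return total / len(xs)

print(average([1, 2, 3]))
```

In an IDE the same workflow is typically: set a breakpoint in the gutter, start a debug run, then step and inspect; the commands above are the terminal equivalent.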

By @binary_slinger - 5 months
The shortcoming is that print cannot always inspect pointers/objects. Once the debugger hits a breakpoint, the entire context can be inspected.
By @dasil003 - 5 months
Is this peak bike-shedding?