By drue


2009-03-05 21:12:13 8 Comments

Edit: Since it appears that there's either no solution, or I'm doing something so non-standard that nobody knows - I'll revise my question to also ask: What is the best way to accomplish logging when a python app is making a lot of system calls?

My app has two modes. In interactive mode, I want all output to go to the screen as well as to a log file, including output from any system calls. In daemon mode, all output goes to the log. Daemon mode works great using os.dup2(). I can't find a way to "tee" all output to a log in interactive mode, without modifying each and every system call.


In other words, I want the functionality of the command line 'tee' for any output generated by a python app, including system call output.

To clarify:

To redirect all output I do something like this, and it works great:

# open our log file
so = se = open("%s.log" % self.name, 'w', 0)

# re-open stdout without buffering
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

# redirect stdout and stderr to the log file opened above
os.dup2(so.fileno(), sys.stdout.fileno())
os.dup2(se.fileno(), sys.stderr.fileno())

The nice thing about this is that it requires no special print calls from the rest of the code. The code also runs some shell commands, so it's nice not having to deal with each of their output individually as well.

Simply, I want to do the same, except duplicating instead of redirecting.

At first thought, I thought that simply reversing the dup2's should work. Why doesn't it? Here's my test:

import os, sys

### my broken solution:
so = se = open("a.log", 'w', 0)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

os.dup2(sys.stdout.fileno(), so.fileno())
os.dup2(sys.stderr.fileno(), se.fileno())
###

print("foo bar")

os.spawnve("P_WAIT", "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)

The file "a.log" should be identical to what was displayed on the screen.

16 comments

@Jacob Gabrielson 2009-03-16 19:00:09

Since you're comfortable spawning external processes from your code, you could use tee itself. I don't know of any Unix system calls that do exactly what tee does.

# Note this version was written circa Python 2.6, see below for
# an updated 3.3+-compatible version.
import subprocess, os, sys

# Unbuffer output (this ensures the output is in the correct order)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

tee = subprocess.Popen(["tee", "log.txt"], stdin=subprocess.PIPE)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())

print "\nstdout"
print >>sys.stderr, "stderr"
os.spawnve("P_WAIT", "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)

You could also emulate tee using the multiprocessing package (or use processing if you're using Python 2.5 or earlier).
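For example, a rough sketch of that multiprocessing variant in Python 3 (this assumes a Unix-like system using the fork start method; the file name log.txt and the tee_worker function are just illustrative):

import multiprocessing, os, sys

def tee_worker(read_fd, write_fd, orig_stdout_fd, logname):
    os.close(write_fd)  # drop the inherited write end so EOF can eventually arrive
    with os.fdopen(read_fd, "rb", 0) as pipe_in, open(logname, "wb", 0) as log:
        for chunk in iter(lambda: pipe_in.read(4096), b""):
            os.write(orig_stdout_fd, chunk)  # copy to the real terminal
            log.write(chunk)                 # and to the log file

read_fd, write_fd = os.pipe()
orig_stdout_fd = os.dup(sys.stdout.fileno())  # keep a handle on the real stdout

worker = multiprocessing.Process(target=tee_worker,
                                 args=(read_fd, write_fd, orig_stdout_fd, "log.txt"))
worker.start()
os.close(read_fd)  # the parent only writes

# Point fd 1 and fd 2 (and therefore any child processes) at the pipe.
os.dup2(write_fd, sys.stdout.fileno())
os.dup2(write_fd, sys.stderr.fileno())
os.close(write_fd)

print("goes to the screen and to log.txt", flush=True)
os.system("ls")  # output of child processes is teed as well

# Restore the real stdout/stderr; the worker then sees EOF and can finish.
sys.stdout.flush(); sys.stderr.flush()
os.dup2(orig_stdout_fd, sys.stdout.fileno())
os.dup2(orig_stdout_fd, sys.stderr.fileno())
worker.join()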

Update

Here is a Python 3.3+-compatible version:

import subprocess, os, sys

tee = subprocess.Popen(["tee", "log.txt"], stdin=subprocess.PIPE)
# Cause tee's stdin to get a copy of our stdin/stdout (as well as that
# of any child processes we spawn)
os.dup2(tee.stdin.fileno(), sys.stdout.fileno())
os.dup2(tee.stdin.fileno(), sys.stderr.fileno())

# The flush flag is needed to guarantee these lines are written before
# the two spawned /bin/ls processes emit any output
print("\nstdout", flush=True)
print("stderr", file=sys.stderr, flush=True)

# These child processes inherit our stdout/stderr, so their output goes through tee too
os.spawnve(os.P_WAIT, "/bin/ls", ["/bin/ls"], {})
os.execve("/bin/ls", ["/bin/ls"], os.environ)

@drue 2009-03-18 19:10:43

Well, this answer works, so I'll accept it. Still, it makes me feel dirty.

@sorin 2010-06-08 09:05:38

This will not work on non-Unix systems, like Windows.

@sorin 2010-08-06 11:43:15

I just posted a pure python implementation of tee (py2/3 compatible) that can run on any platform and also be used in different logging configurations. stackoverflow.com/questions/616645/…

@anatoly techtonik 2012-12-13 21:49:16

If Python runs on one of my machines and the solution doesn't, then that's not a pythonic solution. Downvoted because of that.

@Marcelo MD 2013-06-24 14:35:05

What is the second line (sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)) doing? Unbuffering sys.stdout?

@Jacob Gabrielson 2013-06-25 19:10:16

@MarceloMD correct

@Temak 2016-06-09 12:06:04

There is an issue with running this code several times

@Eric H. 2017-11-15 08:50:36

How can stdout and stderr be restored before the end of the program? I tried: tee.kill(); sys.stdout = sys.__stdout__; the log file is correctly closed, but print commands that follow are lost...

@Eric H. 2017-11-15 12:15:30

(answering my own comment just above) It seems that the two os.dup2 calls irreversibly redirect sys.stdout and sys.stderr (and sys.__stdout__, sys.__stderr__) until the end of the program (unless they can be re-opened some harder way that I don't know about...).
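For what it's worth, the redirection can be made reversible by saving duplicates of the original descriptors before the os.dup2() calls (a sketch; the function name is just illustrative):

import os, sys

# Before redirecting, keep copies of the real stdout/stderr descriptors.
saved_stdout_fd = os.dup(sys.stdout.fileno())
saved_stderr_fd = os.dup(sys.stderr.fileno())

# ... os.dup2(tee.stdin.fileno(), sys.stdout.fileno()) etc. as in the answer ...

def restore_std_streams():
    sys.stdout.flush()
    sys.stderr.flush()
    os.dup2(saved_stdout_fd, sys.stdout.fileno())  # point fd 1 back at the terminal
    os.dup2(saved_stderr_fd, sys.stderr.fileno())  # point fd 2 back at the terminal
    os.close(saved_stdout_fd)
    os.close(saved_stderr_fd)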

@Ken Myers 2017-12-19 20:25:37

According to this post the line sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) no longer works since python 3.3 (see PEP 3116)

@matthieu 2019-04-01 22:34:42

Got the error "sys:1: ResourceWarning: unclosed file <_io.BufferedWriter name=5>", so I had to add tee.stdin.close() at the end of my program. I also get "ResourceWarning: subprocess 1842 is still running", and adding sys.stdout.close(); sys.stderr.close() at the end of the program fixes it.
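A sketch of that cleanup, assuming tee was started as in the answer above (registering it with atexit is just one option):

import atexit, sys

def close_tee():
    sys.stdout.close()  # closes fd 1, one write end of the pipe
    sys.stderr.close()  # closes fd 2, the other write end
    tee.stdin.close()   # closes our last handle, so tee finally sees EOF
    tee.wait()          # let tee flush and exit before the interpreter does

atexit.register(close_tee)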

@Tom 2019-05-12 18:31:06

Can you explain how the unbuffering works? dup2 will silently close the fd before reusing it. So why does it matter to change the buffering? Does dup2 preserve the buffering across the exchange?
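To illustrate what the unbuffering protects against: the buffering lives in the Python file object, not in the descriptor that dup2() swaps, so Python-level output can lag behind child-process output (a sketch; the effect shows up when fd 1 is a pipe, such as tee's stdin, rather than a terminal):

import subprocess, sys

sys.stdout.write("written first, but held in sys.stdout's buffer\n")
# The child writes straight to fd 1, so its line can reach the pipe first:
subprocess.call(["echo", "written second, straight to fd 1"])
sys.stdout.flush()  # only now does the buffered line reach fd 1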

@Jensen Taylor 2018-12-07 11:46:24

If you wish to log all output to a file AND still print it to the console, you can do the following. It's a bit hacky but it works:

import logging
debug = input("Debug or not")
if debug == "1":
    logging.basicConfig(level=logging.DEBUG, filename='./OUT.txt')
    old_print = print
    def print(string):
        old_print(string)
        logging.info(string)
print("OMG it works!")

EDIT: Note that this does not log errors unless you redirect sys.stderr to sys.stdout

EDIT2: A second issue is that you have to pass exactly one argument, unlike the builtin function.

EDIT3: See the code below to write stdin and stdout to both the console and a file, with stderr only going to the file

import logging, sys
debug = input("Debug or not")
if debug == "1":
    old_input = input
    sys.stderr.write = logging.info
    def input(string=""):
        string_in = old_input(string)
        logging.info("STRING IN " + string_in)
        return string_in
    logging.basicConfig(level=logging.DEBUG, filename='./OUT.txt')
    old_print = print
    def print(string="", string2=""):
        old_print(string, string2)
        logging.info(string)
        logging.info(string2)
print("OMG")
b = input()
print(a) ## Deliberate error for testing

@cladmi 2013-07-30 09:07:29

To complete John T's answer: https://stackoverflow.com/a/616686/395687

I added __enter__ and __exit__ methods to use it as a context manager with the with keyword, which gives this code

import sys

class Tee(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self

    def __del__(self):
        sys.stdout = self.stdout
        self.file.close()

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def __enter__(self):
        pass

    def __exit__(self, _type, _value, _traceback):
        pass

It can then be used as

with Tee('outfile.log', 'w'):
    print('I am written to both stdout and outfile.log')

@vontrapp 2017-01-21 21:12:29

I would move the __del__ functionality into __exit__

@cladmi 2017-01-23 20:18:54

Indeed, I think using __del__ is a bad idea. It should be moved to a 'close' function which is called in __exit__.
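Such a variant might look like this (a sketch; close() is called from __exit__ rather than relying on __del__, and a flush() is included since print() may call it):

import sys

class Tee(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self

    def close(self):
        # Restore the real stdout and release the file exactly once.
        if self.stdout is not None:
            sys.stdout = self.stdout
            self.stdout = None
        if self.file is not None:
            self.file.close()
            self.file = None

    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)

    def flush(self):
        self.file.flush()
        self.stdout.flush()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()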

@user 2018-02-25 04:14:53

I wrote a full replacement for sys.stderr and then duplicated the code, renaming stderr to stdout, so it can also replace sys.stdout.

To do this, I create an object of the same type as the current stderr and stdout and forward all method calls to the original system stderr and stdout:

import os
import sys
import logging

class StdErrReplament(object):
    """
        How to redirect stdout and stderr to logger in Python
        https://stackoverflow.com/questions/19425736/how-to-redirect-stdout-and-stderr-to-logger-in-python

        Set a Read-Only Attribute in Python?
        https://stackoverflow.com/questions/24497316/set-a-read-only-attribute-in-python
    """
    is_active = False

    @classmethod
    def lock(cls, logger):
        """
            Attach this singleton logger to the `sys.stderr` permanently.
        """
        global _stderr_singleton
        global _stderr_default
        global _stderr_default_class_type

        # On Sublime Text, `sys.__stderr__` is set to None, because they already replaced `sys.stderr`
        # by some `_LogWriter()` class, then just save the current one over there.
        if not sys.__stderr__:
            sys.__stderr__ = sys.stderr

        try:
            _stderr_default
            _stderr_default_class_type

        except NameError:
            _stderr_default = sys.stderr
            _stderr_default_class_type = type( _stderr_default )

        # Recreate the sys.stderr logger when it was reset by `unlock()`
        if not cls.is_active:
            cls.is_active = True
            _stderr_write = _stderr_default.write

            logger_call = logger.debug
            clean_formatter = logger.clean_formatter

            global _sys_stderr_write
            global _sys_stderr_write_hidden

            if sys.version_info <= (3,2):
                logger.file_handler.terminator = '\n'

            # Always recreate/override the internal write function used by `_sys_stderr_write`
            def _sys_stderr_write_hidden(*args, **kwargs):
                """
                    Suppress newline in Python logging module
                    https://stackoverflow.com/questions/7168790/suppress-newline-in-python-logging-module
                """

                try:
                    _stderr_write( *args, **kwargs )
                    file_handler = logger.file_handler

                    formatter = file_handler.formatter
                    terminator = file_handler.terminator

                    file_handler.formatter = clean_formatter
                    file_handler.terminator = ""

                    kwargs['extra'] = {'_duplicated_from_file': True}
                    logger_call( *args, **kwargs )

                    file_handler.formatter = formatter
                    file_handler.terminator = terminator

                except Exception:
                    logger.exception( "Could not write to the file_handler: %s(%s)", file_handler, logger )
                    cls.unlock()

            # Only create one `_sys_stderr_write` function pointer ever
            try:
                _sys_stderr_write

            except NameError:

                def _sys_stderr_write(*args, **kwargs):
                    """
                        Hides the actual function pointer. This allows the external function pointer to
                        be cached while the internal writer can be exchanged between the standard
                        `sys.stderr.write` and our custom wrapper around it.
                    """
                    _sys_stderr_write_hidden( *args, **kwargs )

        try:
            # Only create one singleton instance ever
            _stderr_singleton

        except NameError:

            class StdErrReplamentHidden(_stderr_default_class_type):
                """
                    Which special methods bypasses __getattribute__ in Python?
                    https://stackoverflow.com/questions/12872695/which-special-methods-bypasses-getattribute-in-python
                """

                if hasattr( _stderr_default, "__abstractmethods__" ):
                    __abstractmethods__ = _stderr_default.__abstractmethods__

                if hasattr( _stderr_default, "__base__" ):
                    __base__ = _stderr_default.__base__

                if hasattr( _stderr_default, "__bases__" ):
                    __bases__ = _stderr_default.__bases__

                if hasattr( _stderr_default, "__basicsize__" ):
                    __basicsize__ = _stderr_default.__basicsize__

                if hasattr( _stderr_default, "__call__" ):
                    __call__ = _stderr_default.__call__

                if hasattr( _stderr_default, "__class__" ):
                    __class__ = _stderr_default.__class__

                if hasattr( _stderr_default, "__delattr__" ):
                    __delattr__ = _stderr_default.__delattr__

                if hasattr( _stderr_default, "__dict__" ):
                    __dict__ = _stderr_default.__dict__

                if hasattr( _stderr_default, "__dictoffset__" ):
                    __dictoffset__ = _stderr_default.__dictoffset__

                if hasattr( _stderr_default, "__dir__" ):
                    __dir__ = _stderr_default.__dir__

                if hasattr( _stderr_default, "__doc__" ):
                    __doc__ = _stderr_default.__doc__

                if hasattr( _stderr_default, "__eq__" ):
                    __eq__ = _stderr_default.__eq__

                if hasattr( _stderr_default, "__flags__" ):
                    __flags__ = _stderr_default.__flags__

                if hasattr( _stderr_default, "__format__" ):
                    __format__ = _stderr_default.__format__

                if hasattr( _stderr_default, "__ge__" ):
                    __ge__ = _stderr_default.__ge__

                if hasattr( _stderr_default, "__getattribute__" ):
                    __getattribute__ = _stderr_default.__getattribute__

                if hasattr( _stderr_default, "__gt__" ):
                    __gt__ = _stderr_default.__gt__

                if hasattr( _stderr_default, "__hash__" ):
                    __hash__ = _stderr_default.__hash__

                if hasattr( _stderr_default, "__init__" ):
                    __init__ = _stderr_default.__init__

                if hasattr( _stderr_default, "__init_subclass__" ):
                    __init_subclass__ = _stderr_default.__init_subclass__

                if hasattr( _stderr_default, "__instancecheck__" ):
                    __instancecheck__ = _stderr_default.__instancecheck__

                if hasattr( _stderr_default, "__itemsize__" ):
                    __itemsize__ = _stderr_default.__itemsize__

                if hasattr( _stderr_default, "__le__" ):
                    __le__ = _stderr_default.__le__

                if hasattr( _stderr_default, "__lt__" ):
                    __lt__ = _stderr_default.__lt__

                if hasattr( _stderr_default, "__module__" ):
                    __module__ = _stderr_default.__module__

                if hasattr( _stderr_default, "__mro__" ):
                    __mro__ = _stderr_default.__mro__

                if hasattr( _stderr_default, "__name__" ):
                    __name__ = _stderr_default.__name__

                if hasattr( _stderr_default, "__ne__" ):
                    __ne__ = _stderr_default.__ne__

                if hasattr( _stderr_default, "__new__" ):
                    __new__ = _stderr_default.__new__

                if hasattr( _stderr_default, "__prepare__" ):
                    __prepare__ = _stderr_default.__prepare__

                if hasattr( _stderr_default, "__qualname__" ):
                    __qualname__ = _stderr_default.__qualname__

                if hasattr( _stderr_default, "__reduce__" ):
                    __reduce__ = _stderr_default.__reduce__

                if hasattr( _stderr_default, "__reduce_ex__" ):
                    __reduce_ex__ = _stderr_default.__reduce_ex__

                if hasattr( _stderr_default, "__repr__" ):
                    __repr__ = _stderr_default.__repr__

                if hasattr( _stderr_default, "__setattr__" ):
                    __setattr__ = _stderr_default.__setattr__

                if hasattr( _stderr_default, "__sizeof__" ):
                    __sizeof__ = _stderr_default.__sizeof__

                if hasattr( _stderr_default, "__str__" ):
                    __str__ = _stderr_default.__str__

                if hasattr( _stderr_default, "__subclasscheck__" ):
                    __subclasscheck__ = _stderr_default.__subclasscheck__

                if hasattr( _stderr_default, "__subclasses__" ):
                    __subclasses__ = _stderr_default.__subclasses__

                if hasattr( _stderr_default, "__subclasshook__" ):
                    __subclasshook__ = _stderr_default.__subclasshook__

                if hasattr( _stderr_default, "__text_signature__" ):
                    __text_signature__ = _stderr_default.__text_signature__

                if hasattr( _stderr_default, "__weakrefoffset__" ):
                    __weakrefoffset__ = _stderr_default.__weakrefoffset__

                if hasattr( _stderr_default, "mro" ):
                    mro = _stderr_default.mro

                def __init__(self):
                    """
                        Override any super class `type( _stderr_default )` constructor, so we can 
                        instantiate any kind of `sys.stderr` replacement object, in case it was already 
                        replaced by something else like on Sublime Text with `_LogWriter()`.

                        Assures all attributes were statically replaced just above. This should happen in case
                        some new attribute is added to the python language.

                        This also ignores the only two methods which are not equal, `__init__()` and `__getattribute__()`.
                    """
                    different_methods = ("__init__", "__getattribute__")
                    attributes_to_check = set( dir( object ) + dir( type ) )

                    for attribute in attributes_to_check:

                        if attribute not in different_methods \
                                and hasattr( _stderr_default, attribute ):

                            base_class_attribute = super( _stderr_default_class_type, self ).__getattribute__( attribute )
                            target_class_attribute = _stderr_default.__getattribute__( attribute )

                            if base_class_attribute != target_class_attribute:
                                sys.stderr.write( "    The base class attribute `%s` is different from the target class:\n%s\n%s\n\n" % (
                                        attribute, base_class_attribute, target_class_attribute ) )

                def __getattribute__(self, item):

                    if item == 'write':
                        return _sys_stderr_write

                    try:
                        return _stderr_default.__getattribute__( item )

                    except AttributeError:
                        return super( _stderr_default_class_type, _stderr_default ).__getattribute__( item )

            _stderr_singleton = StdErrReplamentHidden()
            sys.stderr = _stderr_singleton

        return cls

    @classmethod
    def unlock(cls):
        """
            Detach this `stderr` writer from `sys.stderr` and allow the next call to `lock()` to create
            a new writer for the stderr.
        """

        if cls.is_active:
            global _sys_stderr_write_hidden

            cls.is_active = False
            _sys_stderr_write_hidden = _stderr_default.write



class StdOutReplament(object):
    """
        How to redirect stdout and stderr to logger in Python
        https://stackoverflow.com/questions/19425736/how-to-redirect-stdout-and-stderr-to-logger-in-python

        Set a Read-Only Attribute in Python?
        https://stackoverflow.com/questions/24497316/set-a-read-only-attribute-in-python
    """
    is_active = False

    @classmethod
    def lock(cls, logger):
        """
            Attach this singleton logger to the `sys.stdout` permanently.
        """
        global _stdout_singleton
        global _stdout_default
        global _stdout_default_class_type

        # On Sublime Text, `sys.__stdout__` is set to None, because they already replaced `sys.stdout`
        # by some `_LogWriter()` class, then just save the current one over there.
        if not sys.__stdout__:
            sys.__stdout__ = sys.stdout

        try:
            _stdout_default
            _stdout_default_class_type

        except NameError:
            _stdout_default = sys.stdout
            _stdout_default_class_type = type( _stdout_default )

        # Recreate the sys.stdout logger when it was reset by `unlock()`
        if not cls.is_active:
            cls.is_active = True
            _stdout_write = _stdout_default.write

            logger_call = logger.debug
            clean_formatter = logger.clean_formatter

            global _sys_stdout_write
            global _sys_stdout_write_hidden

            if sys.version_info <= (3,2):
                logger.file_handler.terminator = '\n'

            # Always recreate/override the internal write function used by `_sys_stdout_write`
            def _sys_stdout_write_hidden(*args, **kwargs):
                """
                    Suppress newline in Python logging module
                    https://stackoverflow.com/questions/7168790/suppress-newline-in-python-logging-module
                """

                try:
                    _stdout_write( *args, **kwargs )
                    file_handler = logger.file_handler

                    formatter = file_handler.formatter
                    terminator = file_handler.terminator

                    file_handler.formatter = clean_formatter
                    file_handler.terminator = ""

                    kwargs['extra'] = {'_duplicated_from_file': True}
                    logger_call( *args, **kwargs )

                    file_handler.formatter = formatter
                    file_handler.terminator = terminator

                except Exception:
                    logger.exception( "Could not write to the file_handler: %s(%s)", file_handler, logger )
                    cls.unlock()

            # Only create one `_sys_stdout_write` function pointer ever
            try:
                _sys_stdout_write

            except NameError:

                def _sys_stdout_write(*args, **kwargs):
                    """
                        Hides the actual function pointer. This allows the external function pointer to
                        be cached while the internal writer can be exchanged between the standard
                        `sys.stdout.write` and our custom wrapper around it.
                    """
                    _sys_stdout_write_hidden( *args, **kwargs )

        try:
            # Only create one singleton instance ever
            _stdout_singleton

        except NameError:

            class StdOutReplamentHidden(_stdout_default_class_type):
                """
                    Which special methods bypasses __getattribute__ in Python?
                    https://stackoverflow.com/questions/12872695/which-special-methods-bypasses-getattribute-in-python
                """

                if hasattr( _stdout_default, "__abstractmethods__" ):
                    __abstractmethods__ = _stdout_default.__abstractmethods__

                if hasattr( _stdout_default, "__base__" ):
                    __base__ = _stdout_default.__base__

                if hasattr( _stdout_default, "__bases__" ):
                    __bases__ = _stdout_default.__bases__

                if hasattr( _stdout_default, "__basicsize__" ):
                    __basicsize__ = _stdout_default.__basicsize__

                if hasattr( _stdout_default, "__call__" ):
                    __call__ = _stdout_default.__call__

                if hasattr( _stdout_default, "__class__" ):
                    __class__ = _stdout_default.__class__

                if hasattr( _stdout_default, "__delattr__" ):
                    __delattr__ = _stdout_default.__delattr__

                if hasattr( _stdout_default, "__dict__" ):
                    __dict__ = _stdout_default.__dict__

                if hasattr( _stdout_default, "__dictoffset__" ):
                    __dictoffset__ = _stdout_default.__dictoffset__

                if hasattr( _stdout_default, "__dir__" ):
                    __dir__ = _stdout_default.__dir__

                if hasattr( _stdout_default, "__doc__" ):
                    __doc__ = _stdout_default.__doc__

                if hasattr( _stdout_default, "__eq__" ):
                    __eq__ = _stdout_default.__eq__

                if hasattr( _stdout_default, "__flags__" ):
                    __flags__ = _stdout_default.__flags__

                if hasattr( _stdout_default, "__format__" ):
                    __format__ = _stdout_default.__format__

                if hasattr( _stdout_default, "__ge__" ):
                    __ge__ = _stdout_default.__ge__

                if hasattr( _stdout_default, "__getattribute__" ):
                    __getattribute__ = _stdout_default.__getattribute__

                if hasattr( _stdout_default, "__gt__" ):
                    __gt__ = _stdout_default.__gt__

                if hasattr( _stdout_default, "__hash__" ):
                    __hash__ = _stdout_default.__hash__

                if hasattr( _stdout_default, "__init__" ):
                    __init__ = _stdout_default.__init__

                if hasattr( _stdout_default, "__init_subclass__" ):
                    __init_subclass__ = _stdout_default.__init_subclass__

                if hasattr( _stdout_default, "__instancecheck__" ):
                    __instancecheck__ = _stdout_default.__instancecheck__

                if hasattr( _stdout_default, "__itemsize__" ):
                    __itemsize__ = _stdout_default.__itemsize__

                if hasattr( _stdout_default, "__le__" ):
                    __le__ = _stdout_default.__le__

                if hasattr( _stdout_default, "__lt__" ):
                    __lt__ = _stdout_default.__lt__

                if hasattr( _stdout_default, "__module__" ):
                    __module__ = _stdout_default.__module__

                if hasattr( _stdout_default, "__mro__" ):
                    __mro__ = _stdout_default.__mro__

                if hasattr( _stdout_default, "__name__" ):
                    __name__ = _stdout_default.__name__

                if hasattr( _stdout_default, "__ne__" ):
                    __ne__ = _stdout_default.__ne__

                if hasattr( _stdout_default, "__new__" ):
                    __new__ = _stdout_default.__new__

                if hasattr( _stdout_default, "__prepare__" ):
                    __prepare__ = _stdout_default.__prepare__

                if hasattr( _stdout_default, "__qualname__" ):
                    __qualname__ = _stdout_default.__qualname__

                if hasattr( _stdout_default, "__reduce__" ):
                    __reduce__ = _stdout_default.__reduce__

                if hasattr( _stdout_default, "__reduce_ex__" ):
                    __reduce_ex__ = _stdout_default.__reduce_ex__

                if hasattr( _stdout_default, "__repr__" ):
                    __repr__ = _stdout_default.__repr__

                if hasattr( _stdout_default, "__setattr__" ):
                    __setattr__ = _stdout_default.__setattr__

                if hasattr( _stdout_default, "__sizeof__" ):
                    __sizeof__ = _stdout_default.__sizeof__

                if hasattr( _stdout_default, "__str__" ):
                    __str__ = _stdout_default.__str__

                if hasattr( _stdout_default, "__subclasscheck__" ):
                    __subclasscheck__ = _stdout_default.__subclasscheck__

                if hasattr( _stdout_default, "__subclasses__" ):
                    __subclasses__ = _stdout_default.__subclasses__

                if hasattr( _stdout_default, "__subclasshook__" ):
                    __subclasshook__ = _stdout_default.__subclasshook__

                if hasattr( _stdout_default, "__text_signature__" ):
                    __text_signature__ = _stdout_default.__text_signature__

                if hasattr( _stdout_default, "__weakrefoffset__" ):
                    __weakrefoffset__ = _stdout_default.__weakrefoffset__

                if hasattr( _stdout_default, "mro" ):
                    mro = _stdout_default.mro

                def __init__(self):
                    """
                        Override any super class `type( _stdout_default )` constructor, so we can 
                        instantiate any kind of `sys.stdout` replacement object, in case it was already 
                        replaced by something else like on Sublime Text with `_LogWriter()`.

                        Assures all attributes were statically replaced just above. This should happen in case
                        some new attribute is added to the python language.

                        This also ignores the only two methods which are not equal, `__init__()` and `__getattribute__()`.
                    """
                    different_methods = ("__init__", "__getattribute__")
                    attributes_to_check = set( dir( object ) + dir( type ) )

                    for attribute in attributes_to_check:

                        if attribute not in different_methods \
                                and hasattr( _stdout_default, attribute ):

                            base_class_attribute = super( _stdout_default_class_type, self ).__getattribute__( attribute )
                            target_class_attribute = _stdout_default.__getattribute__( attribute )

                            if base_class_attribute != target_class_attribute:
                                sys.stdout.write( "    The base class attribute `%s` is different from the target class:\n%s\n%s\n\n" % (
                                        attribute, base_class_attribute, target_class_attribute ) )

                def __getattribute__(self, item):

                    if item == 'write':
                        return _sys_stdout_write

                    try:
                        return _stdout_default.__getattribute__( item )

                    except AttributeError:
                        return super( _stdout_default_class_type, _stdout_default ).__getattribute__( item )

            _stdout_singleton = StdOutReplamentHidden()
            sys.stdout = _stdout_singleton

        return cls

    @classmethod
    def unlock(cls):
        """
            Detach this `stdout` writer from `sys.stdout` and allow the next call to `lock()` to create
            a new writer for the stdout.
        """

        if cls.is_active:
            global _sys_stdout_write_hidden

            cls.is_active = False
            _sys_stdout_write_hidden = _stdout_default.write

To use this, you can just call StdErrReplament.lock(logger) and StdOutReplament.lock(logger), passing the logger you want to use for the output text. For example:

import os
import sys
import logging

current_folder = os.path.dirname( os.path.realpath( __file__ ) )
log_file_path = os.path.join( current_folder, "my_log_file.txt" )

file_handler = logging.FileHandler( log_file_path, 'a' )
file_handler.formatter = logging.Formatter( "%(asctime)s %(name)s %(levelname)s - %(message)s", "%Y-%m-%d %H:%M:%S" )

log = logging.getLogger( __name__ )
log.setLevel( "DEBUG" )
log.addHandler( file_handler )

log.file_handler = file_handler
log.clean_formatter = logging.Formatter( "", "" )

StdOutReplament.lock( log )
StdErrReplament.lock( log )

log.debug( "I am doing usual logging debug..." )
sys.stderr.write( "Tests 1...\n" )
sys.stdout.write( "Tests 2...\n" )

Running this code, you will see on the screen:

(screenshot omitted)

And in the file contents:

(screenshot omitted)

If you would also like to see the output of the log.debug calls on the screen, you will need to add a stream handler to your logger. In that case it would look like this:

import os
import sys
import logging

class ContextFilter(logging.Filter):
    """ This filter avoids duplicated information to be displayed to the StreamHandler log. """
    def filter(self, record):
        return not "_duplicated_from_file" in record.__dict__

current_folder = os.path.dirname( os.path.realpath( __file__ ) )
log_file_path = os.path.join( current_folder, "my_log_file.txt" )

stream_handler = logging.StreamHandler()
file_handler = logging.FileHandler( log_file_path, 'a' )

formatter = logging.Formatter( "%(asctime)s %(name)s %(levelname)s - %(message)s", "%Y-%m-%d %H:%M:%S" )
file_handler.formatter = formatter
stream_handler.formatter = formatter
stream_handler.addFilter( ContextFilter() )

log = logging.getLogger( __name__ )
log.setLevel( "DEBUG" )
log.addHandler( file_handler )
log.addHandler( stream_handler )

log.file_handler = file_handler
log.stream_handler = stream_handler
log.clean_formatter = logging.Formatter( "", "" )

StdOutReplament.lock( log )
StdErrReplament.lock( log )

log.debug( "I am doing usual logging debug..." )
sys.stderr.write( "Tests 1...\n" )
sys.stdout.write( "Tests 2...\n" )

Which would produce output like this when run:

(screenshot omitted)

While still saving the same lines to the file my_log_file.txt:

(screenshot omitted)

When you disable this with StdErrReplament.unlock(), it only restores the standard behavior of the stderr stream; the attached logger can never be detached, because someone else may still hold a reference to its older version. That is why it is a global singleton that can never die. Therefore, if this module is reloaded with imp or something else, it will never recapture the current sys.stderr, since that was already injected into it and saved internally.

@Attila Lendvai 2018-07-03 15:23:39

an amazing level of accidental complexity for duplicating a stream.

@John T 2009-03-05 21:24:48

I had this same issue before and found this snippet very useful:

import sys

class Tee(object):
    def __init__(self, name, mode):
        self.file = open(name, mode)
        self.stdout = sys.stdout
        sys.stdout = self
    def __del__(self):
        sys.stdout = self.stdout
        self.file.close()
    def write(self, data):
        self.file.write(data)
        self.stdout.write(data)
    def flush(self):
        self.file.flush()

from: http://mail.python.org/pipermail/python-list/2007-May/438106.html

@Ben Blank 2009-03-05 21:35:44

+1 for handling the sys.stdout reassignment internally so that you can end logging by deleting the Tee object

@Luke Stanley 2011-06-25 09:29:26

I'd add a flush to that. E.g: 'self.file.flush()'

@fodon 2012-02-03 22:09:05

logging module looks better.

@Kobor42 2012-07-16 08:21:55

I disagree about logging module. Excellent for some fiddling. Logging is too big for that.

@Fabio A. 2013-05-10 11:02:50

Thanks for this code!

@martineau 2013-05-14 18:36:36

Be sure to note the revised version in this follow-up to the linked discussion in the answer.

@martineau 2013-05-14 19:10:03

Something similar can be implemented as a contextmanager for added flexibility as illustrated in this answer.

@Yura Purbeev 2014-05-14 06:39:20

Nice idea, though I found it better to open files outside the Tee class via with

@Nux 2014-07-14 10:02:15

That will NOT work. __del__ is not called until the end of execution. See stackoverflow.com/questions/6104535/…

@pihug12 2015-03-29 09:46:21

@Nux I'm using it this way: log1 = Tee('a.log', 'w') ; print '123' ; log1.__del__() and log2 = Tee('b.log', 'w') ; print '456' ; log2.__del__(). (See here for the whole code). Is it the right way or it should be corrected?

@user5359531 2017-03-10 20:03:31

@martineau looks like the link to that 'revised version' is now dead. Maybe if you still have a copy of the revision, submit it as a separate answer?

@martineau 2017-03-10 20:54:43

@user5359531: Try this link instead.

@user5359531 2017-03-10 20:56:00

@martineau unfortunately, that link is blocked at my workplace :(

@martineau 2017-03-10 21:09:31

@user5359531 OK, I've posted a copy for you.

@Avsecz 2018-10-16 21:30:43

@Nux For python 3, call self.file.flush() at the end of the .write() method. See also stackoverflow.com/questions/230751/….

@matthieu 2019-04-02 07:42:13

This solution will not log output generated by calls other than print() and sys.stdout.write(). Output generated by C code or Fortran wrapped in Python will not be redirected to the file, because it writes directly to the stdout fd (1) and not to the TextIOWrapper sys.stdout. @Jacob Gabrielson's method will work for that though, but look at the comments of his answer for edits of his proposal.
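A small illustration of the difference (PyLevelTee here is just a throwaway stand-in for the Tee above):

import os, sys

class PyLevelTee(object):
    # Duplicates only Python-level writes made through sys.stdout.
    def __init__(self, stream, logfile):
        self.stream, self.log = stream, logfile
    def write(self, data):
        self.stream.write(data)
        self.log.write(data)
    def flush(self):
        self.stream.flush()
        self.log.flush()

log = open("demo.log", "w")
sys.stdout = PyLevelTee(sys.__stdout__, log)

print("captured")               # routed through PyLevelTee.write(), so it lands in demo.log
os.write(1, b"not captured\n")  # written straight to file descriptor 1, bypassing the Tee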

@martineau 2017-03-10 21:07:36

As per a request by @user5359531 in the comments under @John T's answer, here's a copy of the referenced post to the revised version of the linked discussion in that answer:

Issue of redirecting the stdout to both file and screen
Gabriel Genellina gagsl-py2 at yahoo.com.ar
Mon May 28 12:45:51 CEST 2007


On Mon, 28 May 2007 06:17:39 -0300, 人言落日是天涯，望极天涯不见家
<kelvin.you at gmail.com> wrote:

> I wanna print the log to both the screen and file, so I simulatered a
> 'tee'
>
> class Tee(file):
>
>     def __init__(self, name, mode):
>         file.__init__(self, name, mode)
>         self.stdout = sys.stdout
>         sys.stdout = self
>
>     def __del__(self):
>         sys.stdout = self.stdout
>         self.close()
>
>     def write(self, data):
>         file.write(self, data)
>         self.stdout.write(data)
>
> Tee('logfile', 'w')
> print >>sys.stdout, 'abcdefg'
>
> I found that it only output to the file, nothing to screen. Why?
> It seems the 'write' function was not called when I *print* something.

You create a Tee instance and it is immediately garbage collected. I'd
restore sys.stdout on Tee.close, not __del__ (you forgot to call the
inherited __del__ method, btw).
Mmm, doesn't work. I think there is an optimization somewhere: if it looks
like a real file object, it uses the original file write method, not yours.
The trick would be to use an object that does NOT inherit from file:

import sys
class TeeNoFile(object):
     def __init__(self, name, mode):
         self.file = open(name, mode)
         self.stdout = sys.stdout
         sys.stdout = self
     def close(self):
         if self.stdout is not None:
             sys.stdout = self.stdout
             self.stdout = None
         if self.file is not None:
             self.file.close()
             self.file = None
     def write(self, data):
         self.file.write(data)
         self.stdout.write(data)
     def flush(self):
         self.file.flush()
         self.stdout.flush()
     def __del__(self):
         self.close()

tee=TeeNoFile('logfile', 'w')
print 'abcdefg'
print 'another line'
tee.close()
print 'screen only'
del tee # should do nothing

--
Gabriel Genellina

@Alexander Lebedev 2009-03-05 21:31:47

What you really want is the logging module from the standard library. Create a logger and attach two handlers: one writing to a file and the other to stdout or stderr.

See Logging to multiple destinations for details
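A minimal sketch of that setup (the logger name and file name are just examples):

import logging, sys

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

# One handler writes to a file, the other to the console.
logger.addHandler(logging.FileHandler("myapp.log"))
logger.addHandler(logging.StreamHandler(sys.stdout))

logger.info("this line goes to both myapp.log and stdout")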

@wzzrd 2009-03-17 16:21:42

+1 for doing it without re-inventing the wheel

@anatoly techtonik 2012-12-13 21:43:05

The logging module doesn't record exceptions and other important output sent to stdout, which can be useful when analyzing logs on a build server (for example).

@jfs 2015-07-23 23:25:24

logging module won't redirect output from system calls such as os.write(1, b'stdout')

@Status 2014-07-05 05:24:52

I know this question has been answered repeatedly, but here I've taken John T's answer and modified it to contain the suggested flush, following its linked revised version. I've also added the __enter__ and __exit__ methods, as mentioned in cladmi's answer, for use with the with statement. In addition, the documentation mentions flushing files using os.fsync(), so I've added that as well. I don't know if you really need it, but it's there.

import sys, os

class Logger(object):
    "Lumberjack class - duplicates sys.stdout to a log file and it's okay"
    #source: https://stackoverflow.com/q/616645
    def __init__(self, filename="Red.Wood", mode="a", buff=0):
        self.stdout = sys.stdout
        self.file = open(filename, mode, buff)
        sys.stdout = self

    def __del__(self):
        self.close()

    def __enter__(self):
        pass

    def __exit__(self, *args):
        self.close()

    def write(self, message):
        self.stdout.write(message)
        self.file.write(message)

    def flush(self):
        self.stdout.flush()
        self.file.flush()
        os.fsync(self.file.fileno())

    def close(self):
        if self.stdout != None:
            sys.stdout = self.stdout
            self.stdout = None

        if self.file != None:
            self.file.close()
            self.file = None

You can then use it

with Logger('My_best_girlie_by_my.side'):
    print("we'd sing sing sing")

or

Log=Logger('Sleeps_all.night')
print('works all day')
Log.close()

@Mohammad ElNesr 2016-08-28 11:10:07

Many Thanks @Status, you solved my question (stackoverflow.com/questions/39143417/…). I will put a link to your solution.

@Status 2016-09-26 23:12:33

@MohammadElNesr I just realized an issue with the code when it's used with a with statement. I have fixed it and it now correctly closes at the end of a with block.

@ennetws 2018-11-06 08:03:29

This worked great for me, only needed to change mode to mode="ab" and in the write function self.file.write(message.encode("utf-8"))

@shx2 2013-05-14 19:52:43

Here is another solution, which is more general than the others -- it supports splitting output (written to sys.stdout) to any number of file-like objects. There's no requirement that __stdout__ itself is included.

import sys

class multifile(object):
    def __init__(self, files):
        self._files = files
    def __getattr__(self, attr, *args):
        return self._wrap(attr, *args)
    def _wrap(self, attr, *args):
        def g(*a, **kw):
            for f in self._files:
                res = getattr(f, attr, *args)(*a, **kw)
            return res
        return g

# for a tee-like behavior, use like this:
sys.stdout = multifile([ sys.stdout, open('myfile.txt', 'w') ])

# all these forms work:
print 'abc'
print >>sys.stdout, 'line2'
sys.stdout.write('line3\n')

NOTE: This is a proof-of-concept. The implementation here is not complete, as it only wraps methods of the file-like objects (e.g. write), leaving out members/properties/setattr, etc. However, it is probably good enough for most people as it currently stands.

What I like about it, other than its generality, is that it is clean in the sense it doesn't make any direct calls to write, flush, os.dup2, etc.

@Ben 2013-09-23 19:24:10

I would have init take *files not files, but otherwise, yes, this. None of the other solutions isolate the "tee" functionality without trying to solve other problems. If you want to put a prefix on everything you output, you can wrap this class in a prefix-writer class. (If you want to put a prefix on just one stream, you wrap a stream and hand it to this class.) This one also has the advantage that multifile([]) creates a file that ignores everything (like open('/dev/null')).

@timotree 2019-04-26 13:14:44

Why have _wrap here at all? Couldn't you copy the code in there into __getattr__ and it work the same?

@timotree 2019-04-26 13:56:17

@Ben actually multifile([]) creates a file that raises an UnboundLocalError whenever you call one of its methods. (res is returned without being assigned)
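A defensive variant that avoids that corner case (only the _wrap body changes; everything else is as in the answer):

class multifile(object):
    def __init__(self, files):
        self._files = files
    def __getattr__(self, attr, *args):
        return self._wrap(attr, *args)
    def _wrap(self, attr, *args):
        def g(*a, **kw):
            res = None  # defined even when self._files is empty
            for f in self._files:
                res = getattr(f, attr, *args)(*a, **kw)
            return res
        return g

devnull_like = multifile([])
devnull_like.write("silently ignored\n")  # no UnboundLocalError any more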

@sorin 2010-08-06 11:39:55

I wrote a tee() implementation in Python that should work for most cases, and it works on Windows also.

https://github.com/pycontribs/tendo

Also, you can use it in combination with the logging module if you want.

@Danny Staple 2012-11-28 13:26:11

Hmm - that link no longer works - anywhere else it can be found?

@n611x007 2013-04-09 04:10:36

wow, your package rocks, especially if you know how cumbersome the Windows console culture is but didn't give up to make it work!

@josianator 2011-08-26 15:04:46

None of the answers above really seems to answer the problem posed. I know this is an old thread, but I think this problem is a lot simpler than everyone is making it:

import sys

class tee_err(object):

    def __init__(self):
        self.errout = sys.stderr
        sys.stderr = self

        self.log = 'logfile.log'
        # truncate the log file once at start-up
        log = open(self.log, 'w')
        log.close()

    def write(self, line):
        log = open(self.log, 'a')
        log.write(line)
        log.close()

        self.errout.write(line)

Now this will repeat everything to the normal sys.stderr handler and your file. Create another class tee_out for sys.stdout.

@Rob W 2012-07-25 11:04:06

A similar, better answer was posted over two years before this one: stackoverflow.com/a/616686. Your method is very expensive: Each call to tee=tee_err();tee.write('');tee.write('');... opens+closes a file for each write. See stackoverflow.com/q/4867468 and stackoverflow.com/q/164053 for arguments against this practice.
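A variant that keeps the log file open instead of reopening it on every write might look like this (a sketch):

import sys

class tee_err(object):
    def __init__(self, logname='logfile.log'):
        self.errout = sys.stderr
        self.log = open(logname, 'w')  # opened once and kept open
        sys.stderr = self

    def write(self, line):
        self.log.write(line)
        self.log.flush()               # keep the file current without reopening it
        self.errout.write(line)

    def flush(self):
        self.log.flush()
        self.errout.flush()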

@cognitiaclaeves 2010-10-14 19:51:00

I'm writing a script to run command-line scripts (because in some cases there just is no viable substitute for a Linux command, such as rsync).

What I really wanted was to use the default python logging mechanism in every case where it was possible to do so, but to still capture any error when something went wrong that was unanticipated.

This code seems to do the trick. It may not be particularly elegant or efficient (although it doesn't use string += string, so at least it doesn't have that particular potential bottleneck). I'm posting it in case it gives someone else any useful ideas.

import logging
import os, sys
import datetime

# Get name of module, use as application name
try:
  ME=os.path.split(__file__)[-1].split('.')[0]
except:
  ME='pyExec_'

LOG_IDENTIFIER="uuu___( o O )___uuu "
LOG_IDR_LENGTH=len(LOG_IDENTIFIER)

class PyExec(object):

  # Use this to capture all possible error / output to log
  class SuperTee(object):
      # Original reference: http://mail.python.org/pipermail/python-list/2007-May/442737.html
      def __init__(self, name, mode):
          self.fl = open(name, mode)
          self.fl.write('\n')
          self.stdout = sys.stdout
          self.stdout.write('\n')
          self.stderr = sys.stderr

          sys.stdout = self
          sys.stderr = self

      def __del__(self):
          self.fl.write('\n')
          self.fl.flush()
          sys.stderr = self.stderr
          sys.stdout = self.stdout
          self.fl.close()

      def write(self, data):
          # If the data to write includes the log identifier prefix, then it is already formatted
          if data[0:LOG_IDR_LENGTH]==LOG_IDENTIFIER:
            self.fl.write("%s\n" % data[LOG_IDR_LENGTH:])
            self.stdout.write(data[LOG_IDR_LENGTH:])

          # Otherwise, we can give it a timestamp
          else:

            timestamp=str(datetime.datetime.now())
            if 'Traceback' == data[0:9]:
              data='%s: %s' % (timestamp, data)
              self.fl.write(data)
            else:
              self.fl.write(data)

            self.stdout.write(data)


  def __init__(self, aName, aCmd, logFileName='', outFileName=''):

    # Using name for 'logger' (context?), which is separate from the module or the function
    baseFormatter=logging.Formatter("%(asctime)s \t %(levelname)s \t %(name)s:%(module)s:%(lineno)d \t %(message)s")
    errorFormatter=logging.Formatter(LOG_IDENTIFIER + "%(asctime)s \t %(levelname)s \t %(name)s:%(module)s:%(lineno)d \t %(message)s")

    if logFileName:
      # open passed filename as append
      fl=logging.FileHandler("%s.log" % aName)
    else:
      # otherwise, use log filename as a one-time use file
      fl=logging.FileHandler("%s.log" % aName, 'w')

    fl.setLevel(logging.DEBUG)
    fl.setFormatter(baseFormatter)

    # This will capture stdout and CRITICAL and beyond errors

    if outFileName:
      # open as append; SuperTee requires a mode argument
      teeFile=PyExec.SuperTee("%s_out.log" % aName, 'a')
    else:
      teeFile=PyExec.SuperTee("%s_out.log" % aName, 'w')

    fl_out=logging.StreamHandler( teeFile )
    fl_out.setLevel(logging.CRITICAL)
    fl_out.setFormatter(errorFormatter)

    # Set up logging
    self.log=logging.getLogger('pyExec_main')
    log=self.log

    log.addHandler(fl)
    log.addHandler(fl_out)

    print "Test print statement."

    log.setLevel(logging.DEBUG)

    log.info("Starting %s", ME)
    log.critical("Critical.")

    # Caught exception
    try:
      raise Exception('Exception test.')
    except Exception,e:
      log.exception(str(e))

    # Uncaught exception
    a=2/0


PyExec('test_pyExec',None)

Obviously, if you're not as subject to whimsy as I am, replace LOG_IDENTIFIER with another string that you're not likely to ever see someone write to a log.

@Denis Barmenkov 2010-05-15 19:46:49

Another solution, using the logging module:

import logging
import sys

log = logging.getLogger('stdxxx')

class StreamLogger(object):

    def __init__(self, stream, prefix=''):
        self.stream = stream
        self.prefix = prefix
        self.data = ''

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()

        self.data += data
        tmp = str(self.data)
        if '\x0a' in tmp or '\x0d' in tmp:
            tmp = tmp.rstrip('\x0a\x0d')
            log.info('%s%s' % (self.prefix, tmp))
            self.data = ''


logging.basicConfig(level=logging.INFO,
                    filename='text.log',
                    filemode='a')

sys.stdout = StreamLogger(sys.stdout, '[stdout] ')

print 'test for stdout'

@blokeley 2010-02-07 09:20:32

As described elsewhere, perhaps the best solution is to use the logging module directly:

import logging

logging.basicConfig(level=logging.DEBUG, filename='mylog.log')
logging.info('this should write to the log file')

However, there are some (rare) occasions where you really want to redirect stdout. I had this situation when I was extending django's runserver command which uses print: I didn't want to hack the django source but needed the print statements to go to a file.

This is a way of redirecting stdout and stderr away from the shell using the logging module:

import logging, sys

class LogFile(object):
    """File-like object to log text using the `logging` module."""

    def __init__(self, name=None):
        self.logger = logging.getLogger(name)

    def write(self, msg, level=logging.INFO):
        self.logger.log(level, msg)

    def flush(self):
        for handler in self.logger.handlers:
            handler.flush()

logging.basicConfig(level=logging.DEBUG, filename='mylog.log')

# Redirect stdout and stderr
sys.stdout = LogFile('stdout')
sys.stderr = LogFile('stderr')

print 'this should write to the log file'

You should only use this LogFile implementation if you really cannot use the logging module directly.

@Atlas1j 2009-03-15 18:49:54

(Ah, just re-read your question and see that this doesn't quite apply.)

Here is a sample program that uses the Python logging module. The logging module has been in every version since 2.3. In this sample the logging is configurable by command line options.

In quiet mode it will only log to a file; in normal mode it will log to both a file and the console.

import os
import sys
import logging
from optparse import OptionParser

def initialize_logging(options):
    """ Log information based upon users options"""

    logger = logging.getLogger('project')
    formatter = logging.Formatter('%(asctime)s %(levelname)s\t%(message)s')
    level = logging.__dict__.get(options.loglevel.upper(),logging.DEBUG)
    logger.setLevel(level)

    # Output logging information to screen
    if not options.quiet:
        hdlr = logging.StreamHandler(sys.stderr)
        hdlr.setFormatter(formatter)
        logger.addHandler(hdlr)

    # Output logging information to file
    logfile = os.path.join(options.logdir, "project.log")
    if options.clean and os.path.isfile(logfile):
        os.remove(logfile)
    hdlr2 = logging.FileHandler(logfile)
    hdlr2.setFormatter(formatter)
    logger.addHandler(hdlr2)

    return logger

def main(argv=None):
    if argv is None:
        argv = sys.argv[1:]

    # Setup command line options
    parser = OptionParser("usage: %prog [options]")
    parser.add_option("-l", "--logdir", dest="logdir", default=".", help="log DIRECTORY (default ./)")
    parser.add_option("-v", "--loglevel", dest="loglevel", default="debug", help="logging level (debug, info, error)")
    parser.add_option("-q", "--quiet", action="store_true", dest="quiet", help="do not log to console")
    parser.add_option("-c", "--clean", dest="clean", action="store_true", default=False, help="remove old log file")

    # Process command line options
    (options, args) = parser.parse_args(argv)

    # Setup logger format and output locations
    logger = initialize_logging(options)

    # Examples
    logger.error("This is an error message.")
    logger.info("This is an info message.")
    logger.debug("This is a debug message.")

if __name__ == "__main__":
    sys.exit(main())

@meatvest 2009-12-08 09:42:34

Good answer. I saw some really convoluted ways of replicating logging to the console, but making a StreamHandler with stderr was the answer I've been looking for :)

@emem 2019-08-29 09:12:38

The code is nice but it does not answer the question: this outputs the log to a file and stderr, while the original question asked about duplicating stderr into a log file.

@Triptych 2009-03-05 21:21:21

The print statement will call the write() method of any object you assign to sys.stdout.

I would spin up a small class to write to two places at once...

import sys

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = open("log.dat", "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)  

sys.stdout = Logger()

Now the print statement will both echo to the screen and append to your log file:

# prints "1 2" to <stdout> AND log.dat
print "%d %d" % (1,2)

This is obviously quick-and-dirty. Some notes:

  • You probably ought to parameterize the log filename.
  • You should probably revert sys.stdout to <stdout> if you won't be logging for the duration of the program.
  • You may want the ability to write to multiple log files at once, or handle different log levels, etc.

These are all straightforward enough that I'm comfortable leaving them as exercises for the reader. The key insight here is that print just calls a "file-like object" that's assigned to sys.stdout.

@Devin Jeanpierre 2009-03-05 21:23:14

Exactly what I was going to post, pretty much. +1 when you fix the problem with write not having a self argument. Also, it'd be better design to have the file that you're going to write to passed in. Hell, it might also be better design to have stdout passed in.

@Triptych 2009-03-05 21:25:05

@Devin, yeah this was quick and dirty, I'll make some notes for possible early improvements.

@John T 2009-03-05 21:27:16

much cleaner than the snippet i use, good answer.

@tgray 2009-03-05 21:33:52

John T's snippet brings up a good point though. When the Logger gets deleted you should set stdout back to the default.

@drue 2009-03-05 21:53:14

I selected this answer too soon. It works great for "print", but not so much for external command output.

@blokeley 2010-03-26 07:57:36

The Logger class should also define a flush() method such as "def flush(): self.terminal.flush(); self.log.flush()"

@jpo38 2015-07-23 12:41:47

You say The print statement will call the write() method of any object you assign to sys.stdout. And what about other functions sending data to stdout not using print. For instance, if I create a process using subprocess.call its output goes to the console but not to log.dat file...is there a way to fix that?

@z3ntu 2015-10-26 10:18:05

@blokeley That's what I needed :)

@Vasin Yuriy 2019-10-28 08:39:11

Excellent solution, but I got an error - no flush method. How do I call the parent 'flush' method? Should it be defined in the constructor?
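A sketch of the class with the missing flush() added (the filename parameter is one of the improvements suggested above):

import sys

class Logger(object):
    def __init__(self, filename="log.dat"):
        self.terminal = sys.stdout
        self.log = open(filename, "a")

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        # Called by print(..., flush=True), sys.stdout.flush(), and at interpreter shutdown.
        self.terminal.flush()
        self.log.flush()

sys.stdout = Logger()
print("1 2")  # echoed to the screen and appended to log.dat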
