If you're tailing logs, can I suggest that you try out the Logfile Navigator (https://lnav.org). It really is possible to do better than tail/less/whatever when you just want to look at a local log file:
If you want... a fraction of the features, but still a nice experience, and you want a viewer in your editor, and if your editor is Emacs, try Logview mode - https://github.com/doublep/logview.
I mention it both because I find it very useful and because, seeing Logfile Navigator, I now see that it desperately needs to have more features :).
Inline pretty-printing of JSON/XML/YAML in a log entry, for one, plus a histogram of dates/times. I do these things a lot using various grep/sort/uniq pipelines.
How does it handle merging log files that are in different time zones? I wrote a small script to merge log files from lots of different network appliances a long time ago, it was extremely useful for debugging problems that occurred across our distributed system. The most unfortunate part was that the appliances all logged in local time and didn't include their UTC offsets in the timestamps :(
Timezones are not automatically handled by lnav at the moment, which isn't great. Timestamps are parsed and treated as UTC internally, which usually works out fine. You can manually adjust all of the timestamps for a file using the ":adjust-log-time" command:
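For example (the 5-hour shift below is made up, and the exact accepted argument forms are in lnav's docs):

```
:adjust-log-time -5h
```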
Thanks, I see. At least with `:adjust-log-time` you can make it work!
With my old log merging program, you had to supply a regex with groups for the different timezone components and optionally a UTC offset. That worked really well but was a pain to set up. Typically I was using it to look at the same format of files all the time though, so in practice it wasn't that bad.
I'm not really a C/C++ person but maybe I'll try and hack on lnav a bit and see if I can figure out how to add timezone support.
I recommend using `less <FILE>` <shift-F> rather than plain `tail -f <FILE>` because you can ctrl-c to stop the tailing and then search the text using /. You can then resume tailing again later with another <shift-F>.
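The same workflow can also start in follow mode directly; passing `+F` on the command line is equivalent to pressing <shift-F> after opening (the path here is just an example):

```
less +F /var/log/syslog
```

Ctrl-C drops out of follow mode, / searches, and <shift-F> resumes following.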
I love to press enter a few times to add a few empty lines to serve as a marker. Plus, I set my terminal scrollback buffer to 10000 lines (not a problem nowadays), so cmd-F in the terminal works pretty well. And I prefer the natural/kinetic scrolling of the terminal backlog to less as a fullscreen tty program.
Finally, as the article alludes to, tail can monitor multiple files at once.
Yeah, right? It's not even really a "feature" of anything, just an artifact of how teletype support was originally implemented in UNIX, and yet I've been hitting the enter key for my own "marks" for decades.
less has its own system of bookmarks: use m followed by a lowercase letter to set a mark and single quote followed by a single letter to jump to that mark. I assume that this means we can have up to 26 bookmarks, though I normally don't go much beyond two.
How many times have you seen the last line of a file you're tailing be something like "00:00 Logfile turned over." Both less and tail stop doing their business when a log is rotated and the inode changes. Stop that by adding or extending an alias:
alias less='less --follow-name' # still requires <shift>-f to follow
alias tail='tail -F'
I use tail, head, tac, grep, watch etc all the time to pick out stuff I'm looking for, sometimes I use tail -f, but it's certainly a minority of my tail invocations.
Okay, I wasn't interested in the `less` usage in the GP comment because I use tmux to stop and view logs as they're passing by. However, automatically opening in my CLI editor? That sounds sexy as hell!
Use "-S" to toggle the "--chop-long-lines" option at invocation or in interactive mode. This mode shows only one line on the screen for each line in a file and is helpful when viewing a log with entries much longer than the screen width. When lines are chopped less(1) lets you scroll horizontally with <LEFTARROW>/<RIGHTARROW>. Chopped mode also works in tail mode ("F" command or "+F" option).
As others have mentioned the search "/pattern" and filter "&pattern" commands are awesome and they also work in tail mode.
The -J option toggles the status column which is helpful when using search patterns or paging around a file.
Since less(1) has lots of features and navigation methods it’s good to remember the "h" command to show the help page.
You can stop output on a terminal (emulator) by exercising flow control. If you get overwhelmed by `tail` (or a similar tool's) output, hit ^S. If you are ready to resume, strike ^Q.
uh... doesn't watch just run the command over and over? I would have thought this would just print the entire file every n seconds. Using tail with watch instead of cat makes more sense, though.
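Right: `watch` just re-runs its command every interval and repaints the screen, which is why it pairs better with `tail` than `cat`. A rough comparison (the scratch file name is invented):

```shell
# Scratch file for illustration:
seq 1 100 > /tmp/demo.log
# What `watch cat /tmp/demo.log` would repaint each interval: all 100 lines
cat /tmp/demo.log | wc -l
# What `watch tail -n 10 /tmp/demo.log` repaints: only the last 10
tail -n 10 /tmp/demo.log | wc -l
```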
One of my recently learned tail tricks (which may be very obvious to most of you already, but it's new to me) is to tail -f /proc/<pid>/fd/1 (or 2 if you want to see stderr)
I have some scripts that, for example, call zcat file.tar.gz | dd of=/dev/mmcblk1 bs=1M status=progress (that last bit is important, if your version of dd supports it)
This way I can watch the output of dd in another console if I wanted. I keep finding other uses for it now.
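A small sketch of the trick (Linux-only; the background job here is a throwaway stand-in for a real process):

```shell
# Start a throwaway process whose stdout is redirected to a file:
sleep 30 > /tmp/ticker.log &
pid=$!
# Under /proc, fd/1 is that process's stdout and fd/2 its stderr:
ls -l "/proc/$pid/fd/1"      # a symlink pointing at /tmp/ticker.log
# So this would follow whatever the process writes to stdout:
#   tail -f "/proc/$pid/fd/1"
kill "$pid"
```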
The exact same command, no matter what grep -e you have. You separate the filtering logic from the accessing logic, so you can work on one side without worrying about the other. Want to change from cat to bzcat, or tail, or tac, or nc? No problem. Want to add a pre-filter (say, strip snmpd errors before you add -n to number your matches in grep)? It all stays the same.
Cognitive break, extra typing, potentially having to look at what you're typing rather than relying on muscle memory, all because someone says "cat is useless". My computer isn't from 1965; I don't need to justify every process I run.
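The decoupling point, sketched with invented file names: the filter never changes while the reader is swapped out.

```shell
# Scratch log for illustration:
printf 'INFO ok\nERROR boom\n' > /tmp/app.log
gzip -kf /tmp/app.log            # produces /tmp/app.log.gz, keeps the original
# The filter stays identical no matter how the bytes are produced:
cat  /tmp/app.log    | grep -e ERROR
zcat /tmp/app.log.gz | grep -e ERROR   # swapped the reader; grep unchanged
```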
I used it as a solution to allow me to trigger backups of minecraft worlds from within the game server my son and I share. Anything you put to the 'say' command is written to the game's log. I use
I never type the letter n as in "-n 5" when using tail or head. I just replace n with the number, e.g., "-5". It's a habit I never broke, because I still haven't encountered a situation where it doesn't work.
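For what it's worth, both spellings behave the same with GNU and BSD tail/head (scratch file name invented):

```shell
seq 1 10 > /tmp/nums.txt
tail -5   /tmp/nums.txt    # historic shorthand
tail -n 5 /tmp/nums.txt    # POSIX spelling; identical output
head -3   /tmp/nums.txt    # works for head, too
```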
Had an industrial plant that used these very dated VBScript-based controllers. There was a SQL Server for numeric logs, but nothing for textual logs, and therefore no transparency into the system. The textual logs themselves were stored on the devices in a statically sized array, so if the machine rebooted or a lot of logs happened at once (say, at interesting times like startup or during errors), it was common to lose any interesting logs.
Spent a while trying to come up with a zero-budget solution, and during a literal shower had the thought: "Why don't I just tail a log file?" I managed to alter the controllers' code to transmit recently written log messages to a minimal server, which would just prepend a timestamp and origin to each message and append the line to a file. Then each control console got a Windows build of a tail binary that read the file over Samba, wrapped in a batch file that customized the colors and text a bit, and Plant Tail was born.
The mostly computer-illiterate operators quickly fell in love with it, as it provided an amazing degree of transparency compared to what they were used to, and they demanded an equivalent utility for all future deployments.
TLDR: Don't rule out simple solutions if you can leverage already existing infrastructure.
I wish tail had an -F-like option that does the globbing itself, for when I want to tail any file matching the pattern mylog.*, including files created /after/ the tail command was started...
servers=(foo bar)
pipes=("${servers[@]/%/.pipe}")   # foo.pipe bar.pipe
mkfifo "${pipes[@]}"
tail -F "${pipes[@]}" &           # start the reader before the writers
for p in "${pipes[@]}"; do
true > "$p"                       # briefly open each fifo so tail can get past it
done
for s in "${servers[@]}"; do
ssh "$s" tail -F file.log > "${s/%/.pipe}" &
done
%tail                             # bring the merged tail job back to the foreground
There are timing issues with this alternative. I just ran each statement separately at a prompt and, luckily, it worked. The first `true` statement needs to run after `tail` has started trying to read the first argument, and likewise the second `true` needs to run after `tail` starts reading the second argument. Last time I did something like this, I just typed the statements without the loops and ran them by hand.
Sorry to plug lnav again, but if the data volume is not "too much"[1], the latest version of lnav has support for tailing files on remote hosts that are accessible via ssh:
I tried but my timestamp is of the form %Y%m%d.%H%M%S:
Do you have an option to just specify the timestamp format?
If not, can you give an example of defining own format?
Many thanks
You can ssh host1 tail -F log | sed -e s/^/host1:/
But you might eventually want to log everything to a third host. Use syslog to send logs across the network and configure your local software to log to syslog.
When you have 10k hosts, sampling 1 in N log lines is still useful as a canary, while avoiding a firehose at the central log collector. If something heinous occurs, you can get the unsampled log from the original machine if you need it.
Much more sophisticated techniques abound, and others will comment I’m sure, but it’s also nice to have the syslog trail as a fallback. You never know in advance what’s going to fail and syslog will show you everything from your Apache logs to kernel bugs and faulty NICs.
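For a handful of hosts, the per-host prefix trick scales with a loop. A sketch, simulating two hosts with local files (in real life each reader would be `ssh "$h" tail -F /var/log/app.log`):

```shell
# Stand-ins for two remote hosts' logs:
printf 'up\n'   > /tmp/host1.log
printf 'down\n' > /tmp/host2.log
# Prefix each source's lines with its name and merge the streams:
for h in host1 host2; do
  tail -n +1 "/tmp/$h.log" | sed -e "s/^/$h: /" &
done
wait    # merged, host-prefixed stream on the local terminal
```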
> You can ssh host1 tail -F log | sed -e s/^/host1:/
That's only for one host. If you mean something akin to:
tail -F <(ssh ...) <(ssh ...)
that wouldn't work, because `tail` would wait for the first `ssh/sed` to close their output (to reach the end of the first pipe) before starting to read the second `ssh/sed`'s output.
Honestly, Loki and Promtail are so easy to set up, and take you beyond mere tailing and searching to incorporating logs into Grafana dashboards (you are already using Grafana to monitor host and service metrics, right?), that they make sense to run for everything but the smallest deployments (that is, a single service on a single node).
You often want some indicator that you are tailing the log, rather than having the server write directly to your standard output. A colleague of mine once Ctrl-C'd a trading server that he thought was a tail of the server's log. This happened at an extremely embarrassing moment, and after he had spent hours rebuilding all the trades in the databases, being constantly shouted at by irate traders, he had to take a couple of days off to recover.
So, my advice is always make direct server output stand out in some way (colour, whatever). And, of course think about trapping signals, though this is often not as easy as it might seem.
In the traditional Unix/POSIX world of CLI applications, interactivity is a bit "frowned upon". I really wouldn't mind if an application of this caliber trapped Ctrl-C and displayed an "ARE YOU SURE?" prompt.
Same goes for coreutils. I wouldn't mind an "OK" after a cp or mv so I don't have to type echo $? to confirm it didn't crash or something
I think pipes in command lines are one of the best inventions in CS. Drawing parallels with Unix philosophy, design, and architecture can yield good ideas for non-Unix architectures as well.
I used to view all of my server request logs in one stream with tail, but I didn't like tail's output because it broke it into many lines with "header lines" in between, so I wrote a small PHP script[0] to tail every file separately and output each line with a (shortened) prefix of the filename. Maybe it'll be useful to someone (who has PHP on their system).
I use both! `tail` in one window for the log file from whatever program I am running, and `watch -n 0.1 ls -lh` in another to keep an eye on the file I am writing to. Sometimes I'll put a watch on the hard drive just to make sure I am not filling it with gigabytes of crap.
I would love to see a FUSE filesystem for Kubernetes, CloudWatch, Data Dog, etc. Maybe something that lets you set up the folders/files based on your preferred facets, then read your logs with tail/less/grep as you like. It would be so much faster than clicking & scrolling in their web UIs!
I wish I could set a tail -F to watch a log file that hasn't been generated yet. It seems to complain about the missing file even after it has appeared in the working directory.
I just want to come back and say when I tested this with tail -F just now, it did NOT start actually tailing when the log file was generated. tail (GNU coreutils) 8.22
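For reference, GNU tail documents `-F` as shorthand for `--follow=name --retry`, which should keep retrying until the file appears. A quick self-contained check one can run against their own coreutils version (file names are scratch paths):

```shell
rm -f /tmp/pending.log /tmp/captured
# Follow a file that doesn't exist yet; timeout so the demo terminates:
timeout 5 tail -F /tmp/pending.log > /tmp/captured 2>/dev/null &
sleep 1
echo "first line" > /tmp/pending.log   # the file appears after tail started
wait
cat /tmp/captured                       # shows whether tail picked it up
```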
root-tail made me a Linux user. E DR11 helped, but it was the ease and expectation that I was supposed to be able to monitor the various system logs that convinced me.
https://lnav.org/2013/09/10/competing-with-tail.html