Useful Linux/Unix Commands

find [pathnames] [conditions]
Conditions may be grouped by enclosing them in \( \) (escaped parentheses), negated with ! (use \! in the C shell), given as alternatives by separating them with -o, or repeated (adding restrictions to the match; usually only for -name, -type, -perm).
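For example (the name patterns here are illustrative), alternatives can be grouped and combined with a negation in one command:
find . -type f \( -name '*.tmp' -o -name '*.bak' \) \! -user root -print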
Conditions and actions
-atime/-ctime/-mtime +n | -n | n, -amin/-cmin/-mmin +n | -n | n
Find files last accessed, changed, or modified more than n (+n), less than n (-n), or exactly n days ago (or n minutes ago for the -min variants).
-anewer file, -cnewer file
Find files that were accessed (changed) after file was last modified.
-exec command { } \;
Run the Linux command, from the starting directory, on each file matched by find. When command runs, the argument {} is replaced by the name of the current file. Follow the entire sequence with an escaped semicolon (\;).
-maxdepth num Do not descend more than num levels of directories.
-mindepth num Begin applying tests and actions only at levels deeper than num levels.
-user uname, -group gname
Find files owned by user uname or belonging to group gname.
-name pattern
Find files whose names match pattern. Filename metacharacters may be used but should be escaped or quoted.
-newer file Find files modified more recently than file.
-ok command { } \;
Same as -exec but prompts user to respond with y before command is executed.
-perm nnn
Find files whose permission flags exactly match the octal number nnn; precede nnn with a minus sign to match files that have at least those permission bits set.
-print Print the matching files and directories, using their full pathnames. Return true.
-size n[c] Find files containing n blocks, or if c is specified, n characters long.
-type c
c can be b (block special file), c (character special file), d (directory), p (fifo or named pipe), l (symbolic link), s (socket), or f (plain file).
-false Return false value for each file encountered.
-ilname(iname,ipath,iregex) pattern: A case-insensitive version of -lname,-name,-path,-regex.
-lname pattern
Search for files that are symbolic links, pointing to files named pattern.
-nouser The file's user ID does not correspond to any user.
-nogroup The file's group ID does not correspond to any group.
-path pattern
Find files whose pathnames match pattern. The pattern is matched against the full pathname, relative to the starting pathname.
Examples:
find / -type d -name 'man*' -user ann -print
Find and compress files whose names don't end with .gz:
gzip `find . \! -name '*.gz' -print`
Search the system for files that were modified within the last two days
find / -mtime -2 -print
Recursively grep for a pattern down a directory tree:
find /book -print | xargs grep '[Nn]utshell'
Remove all empty files under current directory (prompting first):
find . -size 0 -ok rm {} \;
find . -name "*.txt" -ok mv {} junkdir \;
find / -name core | xargs /bin/rm -f
find / -name core -exec /bin/rm -f '{}' \; # same thing
find / -name core -delete # same if using Gnu find
#find all files newer than FILE, and delete
find /dir -type f -newer /path/to/FILE -exec rm \{\} \;
#find all files newer than 1 day, and tar them to file.tar
find /dir -type f -mtime -1 | tar -c -T - -f file.tar
Find all occurrences of an old string and replace them with a new value across files:
find . -name '*.java' -exec sed -i 's/net.jcip.examples/concurrency/g' '{}' \;
find . -type f \( -name "*.sh" -o -name "*.java" \)
find / -type f -mtime -7 | xargs tar -rf weekly_incremental.tar
Show only directories:
ls -al /root | grep '^d'
find /root -type d
find ./ -name "f[Oo][Oo]" -print
find . -name '*.log' -exec grep -Hn 'ERROR' {} \;
Find all the hidden files
find . -type f -name ".*"
Finding the 5 largest files:
find . -type f -exec ls -s {} \; | sort -n -r | head -5
find . -name \*.log -exec tail {} \;


Secure Copy with scp
scp name-of-source name-of-destination
Format of name-of-source and name-of-destination:
username@hostname:dir_path_to_file_or_directory
Field                          Default for local host           Default for remote host
Username (followed by @)       Invoking user's username         Same
Hostname (followed by :)       localhost                        localhost
Port number (preceded by #)    22                               22
Directory path                 Current (invoking) directory     Username's remote home directory
Handling of Wildcards
scp for SSH1 and OpenSSH has no special support for wildcards in filenames. It simply lets the shell expand them:
scp *.txt server.example.com:
Wildcards in remote file specifications are evaluated on the local machine, not the remote.
scp1 server.example.com:*.txt . Bad idea!
Always escape your wildcards so they are explicitly ignored by the shell and passed to scp1:
scp1 server.example.com:\*.txt .
scp2 does its own regular expression matching after shell-wildcard expansion is complete.
-r Recursive Copy of Directories
scp -r /usr/local/bin server.example.com:
A clever way to recursively copy directories:
tar cf - /usr/local/bin | ssh server.example.com tar xf -
-p Preserving Permissions
-u Automatic Removal of Original File
Safety features (only available in scp2)
-d ensures that name-of-destination is a directory.
-n instructs the program to describe its actions but not perform any copying.
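A sketch combining both options (the host and the archive/ directory are illustrative):
scp2 -d -n *.txt server.example.com:archive/    # report what would be copied; fail unless archive/ is a directory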

ftp [options] [hostname]
-d Enable debugging.
-g Disable filename globbing.
-i Turn off interactive prompting.
-v Verbose. Show all responses from remote server.
Commands
![command [args]]
Invoke an interactive shell on the local machine.
append local-file [remote-file]
Append a local file to a file on the remote machine.
ascii, binary
cd remote-directory
cdup Change working directory of remote machine to its parent directory.
chmod [mode] [remote-file]
delete remote-file
get remote-file [local-file]
glob
Toggle filename expansion for mdelete, mget, and mput. If globbing is turned off, the filename arguments are taken literally and not expanded.
lcd [directory] Change working directory on local machine.
ls [remote-directory]
mkdir remote-directory-name
mget remote-files
mput [local-files]
mdelete remote-files
newer remote-file [local-file] Get file if remote file is newer than local file.
open host [port]
put local-file [remote-file]
pwd Print name of the current working directory on the remote machine.
reget remote-file [local-file]
Retrieve a file (like get), but restart at the end of local-file. Useful for restarting a dropped transfer.
rename [from] [to]
Rename file from on remote machine to to.
rmdir remote-directory-name
Delete a directory on the remote machine.
send local-file [remote-file] Synonym for put.
system Show type of operating system running on remote machine.
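A short sample session (host and file names are illustrative):
ftp ftp.example.com
ftp> binary
ftp> cd pub
ftp> mget *.tar.gz
ftp> bye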


Compress and decompress
tar [options] [tarfile] [other-files]
-c, --create Create a new archive.
-d, --compare Compare the files stored in tarfile with other-files.
-r, --append Append other-files to the end of an existing archive.
-t, --list Print the names of other-files if they are stored on the archive.
-u, --update Add files if not in the archive or if modified.
-x, --extract, --get Extract other-files from an archive.
-A, --catenate, --concatenate Concatenate a second tar file on to the end of the first.
Options
--atime-preserve Preserve original access time on extracted files.
--exclude=file Remove file from any list of files.
-k, --keep-old-files
When extracting files, do not overwrite existing files with the same names. Instead, print an error message.
-m, --modification-time
Do not restore file modification times; update them to the time of extraction.
-p, --same-permissions, --preserve-permissions
Keep the permissions of extracted files the same as those of the original files.
--remove-files
Remove originals after inclusion in archive.
-K file, --starting-file file Begin tar operation at file file in archive.
-N date, --after-date date Ignore files older than date.
-O, --to-stdout Print extracted files on standard out.
Examples
tar cvf /dev/rmt0 /bin /usr/bin, tar cvf home.tar ~ *.txt, tar tvf home.tar, tar xvf home.tar ~/tmpdir
Create an archive of the current directory and store it in a file backup.tar:
tar cvf - `find . -print` > backup.tar
gzip [options] [files]
-c, --stdout, --to-stdout Print output to standard output, and do not change input files.
-d, --decompress, --uncompress
-f, --force
-r, --recursive
-n, --no-name
-S suffix, --suffix suffix
-t, --test Test compressed file integrity.
Samples:
gzip *, gunzip *, gzip -r dir
gzip -c file1 > foo.gz
gzip -c file2 >> foo.gz
cat file1 file2 | gzip > foo.gz
gzip -c file1 file2 > foo.gz
gunzip [options] [files]
zcat [options] [files]
Read one or more files that have been compressed with gzip or compress and write them to standard output. zcat is identical to gunzip -c and takes the options -fhLV.
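For example (the file names are illustrative):
zcat access.log.1.gz access.log.2.gz | grep -c ERROR
zcat backup.tar.gz | tar xvf -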

grep [options] pattern [files]
Regular expression operators:
Literal characters (match a character exactly):
  a A y 6 % @      Letters, digits, and many special characters match exactly
  \$ \^ \+ \\ \?   Precede other special characters with \ to cancel their regex special meaning
  \n \t \r         Literal newline, tab, return
Anchors and assertions:
  ^                Starts with
  $                Ends with
Character groups (any one character from the group):
  [aAeEiou]        Any character listed from [ to ]
  [^aAeEiou]       Any character except aAeEio or u
  [a-fA-F0-9]      Any hex character (0 to 9 or a to f)
  .                Any character at all
Counts (apply to the previous element):
  +                1 or more ("some") (egrep only)
  *                0 or more ("perhaps some")
  ?                0 or 1 ("perhaps a")
Alternation:
  |                Either, or (egrep only)
Grouping:
  ( )              Group for counts and save to a variable (egrep only)
-c, --count
Print only a count of matched lines. With the -v or --invert-match option, count nonmatching lines.
-e pattern, --regexp=pattern
-f file, --file=file Take a list of patterns from file, one per line.
-h, --no-filename Print matched lines but not filenames (inverse of -l).
-i, --ignore-case
-l, --files-with-matches
List the names of files with matches but not individual matched lines;
-n, --line-number Print lines and their line numbers.
-r, --recursive Recursively read all files under each directory. Same as -d recurse.
-v, --invert-match Print all lines that don't match pattern.
-w, --word-regexp Match on whole words only.
-x, --line-regexp Print lines only if pattern matches the entire line.
-L, --files-without-match List files that contain no matching lines.
Examples:
List the number of users who use bash:
grep -c /bin/bash /etc/passwd
List header files that have at least one #include directive:
grep -l '^#include' /usr/include/*
List files that don't contain pattern:
grep -c pattern files | grep :0
egrep [options] [regexp] [files]
egrep -n '(old|new)\.doc?' files
find . | grep 'freeze.*\.log'
xargs [options] [command]
Execute command (with any initial arguments), but read remaining arguments from standard input instead of specifying them directly. xargs allows command to process more arguments than it could normally handle at once.
xargs' power lies in the fact that it can take the output of one command (such as ls or find) and use that output as arguments to another command.
-a file Read arguments from file instead of standard input.
-d delim Use delim as the input item delimiter instead of whitespace and newlines.
-i[string], --replace[=string]
Replace occurrences of {} (or string, if given) in the command's arguments with the names read from standard input. Unquoted blanks are not considered argument terminators.
-l[lines], --max-lines[=lines], -L lines
Use at most 1 (or lines) nonblank input lines per command line; this can be used to combine every n input lines into one invocation.
-n args, --max-args=args
Allow no more than args arguments on the command line. May be overridden by -s.
-p, --interactive Prompt for confirmation before running each command line. Implies -t.
-P max, --max-procs=max
Allow no more than max processes to run at once. The default is 1. A maximum of 0 allows as many as possible to run at once.
-r, --no-run-if-empty
Tell xargs when to quit: do not run command if standard input contains only blanks.
-s max, --max-chars=max Allow no more than max characters per command line.
-t, --verbose Verbose mode. Print command line on standard error before executing.
-x, --exit If the maximum size (as specified by -s) is exceeded, exit.
Examples
find / -print | xargs grep pattern > out &
Run diff on file pairs (e.g., f1.a and f1.b, f2.a and f2.b ...):
echo * | xargs -t -n2 diff
Display file, one word per line (same as deroff -w):
cat file | xargs -n1
ls olddir | xargs -i -t mv olddir/{} newdir/
find ~ -type f -mtime +1825 | xargs -r ls -l
Lists all directories
find . -maxdepth 1 -type d -print | xargs echo
find . -maxdepth 1 -type d -print | xargs -t -i echo {}
xargs -a foo -d, -L 1 echo
echo "foo,bar,baz" | xargs -d, -L 1 echo
ls | xargs -L 4 echo, ls | xargs -l4 echo
wget
OPTIONS
-b, --background
Logging and Input File Options
-a logfile, --append-output=logfile
-d, --debug
-i file, --input-file=file
Read URLs from file, in which case no URLs need to be on the command line.
-F, --force-html
When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding <base href="url"> to the HTML, or by using the --base command-line option.
-B URL, --base=URL
When used in conjunction with -F, prepends URL to relative links in the file specified by -i.
Download Options
-t number, --tries=number
-O file,--output-document=file
-c, --continue Continue getting a partially-downloaded file.
-T seconds, --timeout=seconds
--limit-rate=amount
Directory Options
-nd, --no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering.
-x, --force-directories
The opposite of -nd: create a hierarchy of directories.
-nH, --no-host-directories Disable generation of host-prefixed directories.
-P prefix, --directory-prefix=prefix
HTTP Options
--http-user=user, --http-passwd=password
Recursive Retrieval Options
-r,--recursive Turn on recursive retrieving.
-l depth, --level=depth Specify recursion maximum depth level depth.
-k, --convert-links
After the download is complete, convert the links in the document to make them suitable for local viewing.
-K, --backup-converted
When converting a file, back up the original version with a .orig suffix.
-p, --page-requisites
This option causes Wget to download all the files that are necessary to properly display a given HTML page.
Recursive Accept/Reject Options
-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to accept or reject.
Examples:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
wget --convert-links -r -p http://geronimo.apache.org/ -o log
wget -p --convert-links http://www.server.com/dir/page.html
Download multiple files on command line using wget
wget http://www.cyberciti.biz/download/lsst.tar.gz ftp://ftp.freebsd.org/pub/sys.tar.gz ftp://ftp.redhat.com/pub/xyz-1rc-i386.rpm
wget -cb -o /tmp/download.log -i /tmp/download.txt
nohup command [arguments]
The nohup command is used to start a command that will ignore hangup signals and will append stdout and stderr to a file.
TTY output is appended to either nohup.out or $HOME/nohup.out
The nohup command will not execute a pipeline or a command list. You can save a pipeline or list in a file and then run it using the sh (default shell) or the bash command.
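For example (the output path and script name are illustrative), wrap the pipeline in sh -c or save it to a script first:
nohup sh -c 'du -s /home/* | sort -n > /tmp/du.out' &
nohup bash mybackup.sh &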
tr [options] [string1 [string2]]
Translate characters—copy standard input to standard output, substituting characters from string1 to string2 or deleting characters in string1.
-c, --complement
Complement characters in string1 with respect to ASCII 001-377.
-d, --delete
Delete characters in string1 from output.
-s, --squeeze-repeats
Squeeze out repeated output characters in string2.
-t, --truncate-set1
Truncate string1 to the length of string2 before translating.
echo "12345678 9247" | tr 123456789 computerh
cat file | tr '[A-Z]' '[a-z]'
tr "set1" "set2" < input.txt
tr "set1" "set2" < input.txt > output.txt
Turn spaces into newlines (ASCII code 012):
tr ' ' '\012' < file
Strip blank lines from file and save in new.file (use \011 instead to change successive tabs into one tab):
tr -s '\012' < file > new.file
tr -d : < file > new.file
tee [options] files
Accept output from another command and send it both to the standard output and to files.
-a, --append Append to files; do not overwrite.
ls -l | tee savefile
su [option] [user] [shell_args]
Create a shell with the effective user-ID user. If no user is specified, create a shell for a privileged user (that is, become a superuser). Enter EOF to terminate.
-, -l, --login
Go through the entire login sequence (i.e., change to user's environment).
-c command, --command=command
Execute command in the new shell and then exit immediately. If command is more than one word, it should be enclosed in quotes
su -c 'find / -name \*.c -print' nobody
-m, -p, --preserve-environment
Do not reset environment variables.
-s shell, --shell=shell
Execute shell, not the shell specified in /etc/passwd, unless shell is restricted.
sudo - execute a command as another user
-i
Spawn a login shell for the specified user (root if none is given). The exit command (or CONTROL-D) terminates the spawned shell, returning the user to their former shell and prompt.
-u
Cause sudo to run the specified command as a user other than root. To specify a uid instead of a username, use #uid.
--
The -- flag indicates that sudo should stop processing command line arguments. It is most useful in conjunction with the -s flag.
sudo shutdown -r +15 "quick reboot"
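A few more sketches (the target usernames and uid are illustrative):
sudo -i                      # spawn a root login shell
sudo -u www-data whoami      # run a command as another user
sudo -u '#1001' id           # specify a uid instead of a username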

Misc
nl: number the lines of a file.
tac [options] [file]: print a file with its lines in reverse order.
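Quick examples (the log path is illustrative and varies by distribution):
nl /etc/passwd | head
tac /var/log/messages | head    # most recent lines first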
Monitoring Programs
ps -efH
The -H parameter organizes the processes in a hierarchical format, showing which processes started which other processes.
Real-time process monitoring: top
Stopping processes: kill, killall
kill `ps -aef | grep dscli | awk '{print $2}'`
Linux Process Signals
Signal Name Description
1 HUP Hang up.
2 INT Interrupt.
3 QUIT Stop running.
9 KILL Unconditionally terminate.
11 SEGV Segment violation.
15 TERM Terminate if possible.
18 CONT Resume execution after STOP or TSTP.
19 STOP Stop unconditionally, but don't terminate.
20 TSTP Stop or pause, but continue to run in background.
The generally accepted procedure is to first try the TERM signal. If the process ignores that, try the INT or HUP signals. If the program recognizes these signals, it’ll try to gracefully stop doing what it was doing before shutting down. The most forceful signal is the KILL signal. When a process receives this signal, it immediately stops running. This can lead to corrupt files.
The killall command
Kill processes by command name. If more than one process is running the specified command, kill all of them. Treat command names that contain a / as files; kill all processes that are executing that file.
-i Prompt for confirmation before killing processes.
Monitoring Disk Space
Mounting media: mount -t type device directory
By default, the mount command displays a list of media devices currently mounted on the system: mount
The filesystem types you're most likely to run into are:
vfat: Windows long filename filesystem.
ntfs: Windows advanced filesystem used in Windows NT, XP, and Vista.
iso9660: The standard CD-ROM filesystem.
To manually mount the USB memory stick at device /dev/sdb1 at location /media/disk: mount -t vfat /dev/sdb1 /media/disk

The -o option allows you to mount the filesystem with a comma-separated list of additional
options. The popular options to use are:
ro: Mount as read-only.
rw: Mount as read-write.
user: Allow an ordinary user to mount the filesystem.
check=none: Mount the filesystem without performing an integrity check.
loop: Mount a file.
Mount an ISO file:
mkdir mnt; mount -t iso9660 -o loop "MEPIS-KDE4-LIVE-DVD 32.iso" mnt
/etc/fstab
List of filesystems to be mounted and options to use when mounting them.
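A sample /etc/fstab entry (device and mount point are illustrative); the fields are device, mount point, filesystem type, options, dump flag, and fsck pass:
/dev/sdb1   /media/disk   vfat   user,rw   0   0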
/etc/mtab
List of filesystems that are currently mounted and the options with which they were mounted.
The umount command:umount [directory | device ]
umount /home/rich/mnt
Using the df command
The df command shows each mounted filesystem that contains data.
-h, --human-readable
Print sizes in a format friendly to human readers, e.g. with an M suffix for megabytes or a G for gigabytes.
-m, --megabytes; -k, --kilobytes
Using the du command
The du command shows the disk usage for a specific directory. It displays all of the files, directories, and subdirectories under the current directory, and shows how many disk blocks each file or directory takes.
-s, --summarize Print only the grand total for each named directory.
-c, --total In addition to normal output, print grand total of all arguments.
-b, -m
-h, --human-readable(same as df)
-x, --one-file-system Skip directories on different filesystems.
-X file, --exclude-from=file Exclude files that match any pattern in file.
--exclude=pattern Exclude files that match pattern.
--max-depth=N Print the total for a directory (or file, with --all) only if it is N or fewer levels below the command-line argument; --max-depth=0 is the same as --summarize.
--time Show the time of the last modification of any file in the directory or any of its subdirectories.
--time=WORD Show time as WORD instead of modification time: atime, access, use, ctime, or status.
--time-style=STYLE Show times using style STYLE: full-iso, long-iso, iso, +FORMAT (FORMAT is interpreted like date).
'du' - Finding the size of a directory
du -h /var/log
du -s /var/log
This displays a summary of the directory size; it is the simplest way to get the total size of a directory.
$ du -S
Display the size of the current directory excluding the size of the subdirectories that exist within that directory.
$ df -h | grep /dev/hda1 | cut -c 41-43
du -ch | grep total
This would have only one line in its output that displays the total size of the current directory including all the subdirectories.
strings [options] files
Search each file specified and print any printable character strings found that are at least four characters long.
strings /bin/ls
cpio flags [options]
Copy file archives in from or out to tape or disk, or to another location on the local machine. Each of the three flags -i, -o, or -p accepts different options.
-i, --extract [options] [patterns]
Copy in (extract) from an archive files whose names match selected patterns. Patterns should be quoted or escaped so they are interpreted by cpio, not by the shell. If pattern is omitted, all files are copied in. Existing files are not overwritten by older versions from the archive unless -u is specified.
-o, --create [options]
Copy out to an archive a list of files whose names are given on the standard input.
-p, --pass-through [options] directory
Copy (pass) files to another directory on the same system. Destination pathnames are interpreted relative to the named directory.
Comparison of valid options
Options available to the -i, -o, and -p flags are shown here.
i: bcdf mnrtsuv B SVCEHMR IF
o: 0a c vABL VC HM O F
p: 0a d lm uv L V R
-0, --null
Expect list of filenames to be terminated with null, not newline. This allows files with a newline in their names to be included.
-d, --make-directories Create directories as needed.
-f, --nonmatching
Reverse the sense of copying; copy all files except those that match patterns.
-O file Archive the output to file, which may be a file on another machine.
-t, --list
Print a table of contents of the input (create no files). When used with the -v option, resembles output of ls -l.
-u, --unconditional Unconditional copy; old files can overwrite new ones.
-v, --verbose Print a list of filenames processed.
Examples
ls | cpio -ov > directory.cpio
The `-o' option creates the archive, and the `-v' option prints the names of the files archived as they are added. Notice that the options can be put together after a single `-' or can be placed separately on the command line. The `>' redirects the cpio output to the file `directory.cpio'.
cpio -idv < tree.cpio
find . -print -depth | cpio -ov > tree.cpio
find . -depth -print0 | cpio --null -pvd new-dir
Some new options are the `-print0' available with GNU find, combined with the `--null' option of cpio. These two options act together to send file names between find and cpio, even if special characters are embedded in the file names. Another is `-p', which tells cpio to pass the files it finds to the directory `new-dir'.
find . -name "*.old" -print | cpio -ocBv > /dev/rst8
Generate a list of files whose names end in .old using find; use list as input to cpio.
cpio -icdv "*save*" < /dev/rst8
Restore from a tape drive all files whose names contain save (subdirectories are created if needed).
Move a directory tree: find . -depth -print | cpio -padm /mydir
In copy-pass mode, cpio copies files from one directory tree to another, combining the copy-out and copy-in steps without actually using an archive. It reads the list of files to copy from the standard input; the directory into which it will copy them is given as a non-option argument.
Open/Extract ISO files
mkdir /mnt/iso; mount -o loop *.iso /mnt/iso
top
top updates its display regularly, every three seconds by default.
You can terminate processes by command name or by PID. When you terminate a process
by name, you save yourself the hassle of looking up the PID, but if there is more than one
process of the same name running, you will kill them all.

To kill by command name: $ killall xclock
If the process doesn't terminate within a few seconds, add the -KILL argument: $ killall -KILL xclock
kill -l shows the signal names and numbers.
starting a process with a lower (or higher) priority than normal?
The nice command starts a process with a lower-than-normal priority. The priority can be reduced by any value from 1 to 19 using the -n argument; without -n, the priority is reduced by a value of 10.
$ nice -n 15 xboard
To raise the priority of a process, you must be root; supply a negative priority adjustment,
from -1 (a slight boost in priority over normal) to -20 (highest priority):
# nice -n -12 xboard
changing the priority of an existing process
renice is the tool for this:
# renice 2 27365; # renice -5 27365
Adding and managing users from the command line
For users, there are useradd, usermod, and userdel; for groups, there are groupadd, groupmod, and groupdel.
The express way to add a user is to use useradd and then set the new user's password using passwd: # useradd jane; # passwd jane
usermod is used to adjust the parameters of existing accounts.
usermod -c "Jane Lee" jane
The userdel command deletes a user. The -r option specifies that the user's
home directory and mail spool (/var/spool/mail/user) should also be removed: # userdel -r jane
groupadd groupname
The only option commonly used is -g, which lets you manually select the group ID (useful
if converting data from an old system):
# groupadd -g 781 groupname
# groupmod -g 947 groupname
# groupmod -n newname groupname
# groupdel groupname
Check all groups: cat /etc/group
We can directly modify this file to add a user to a group.
Add an existing user to an existing group
Add the existing user tony to the ftp supplementary/secondary group with the usermod command, using the -a option,
which adds the user to the supplemental group(s). Use it only with the -G option:
Change existing user tony primary group to www: # usermod -g www tony
Add a new user to a primary group
To add a user tony to the group developers, use the following command:
# useradd -g developers tony; id tony
Add a new user to a secondary group
Use the useradd command to add a new user to an existing group (if the group does not exist, create it first). Syntax:
useradd -G {group-name} username
Managing user passwords with passwd
passwd; passwd jane
The root user can also delete a password from an account (so a user can log in with just a
username): # passwd -d jane
To find out the password status of an account, use -S: # passwd -S jane
Managing groups and delegating group maintenance from the command line
The gpasswd command can be used to set a group password. This is rarely done. However,
it is also used to manage groups and, better yet, to delegate group administration to any user.
To specify the members of a group, use the -M option: # gpasswd -M jane,richard,frank audit
You can also add or delete individual group users using the -a and -d options:
# gpasswd -a audrey audit
# gpasswd -d frank audit
Delegation is performed with the -A (administrator) option: # gpasswd -A jane audit
Control Access to Files
Using group permissions
The group identity can be changed at any time using the newgrp command, and verified with the id command:
newgrp audit; id
The current group identity (also called real group ID) affects only the creation of files and directories; existing files and directories keep their existing group, and a user can access files accessible to any group to which she belongs.
chgrp modifies the group ownership of an existing file: chgrp audit report.txt
Using chgrp and newgrp is cumbersome. A much better solution is to use the SGID permission on directories, which automatically sets the group ownership of files created within the directory:
chgrp soccer game_scores
chmod u=rwx,g=rwxs,o= game_scores
ls -ld game_scores
drwxrws--- 2 richard soccer 4096 Oct 12 19:46 game_scores
Because the SGID permission is set, any file created in that directory is automatically owned
by the group soccer and can be accessed by other group members, exactly what is needed for collaboration within a group. The SGID permission is automatically applied to any directory created within game_scores, too.
Default permissions
The permissions requested when a file or directory is created are limited by the current umask, which is an octal value representing the permissions that should not be granted to new files.
You can set the umask with the shell command of the same name: umask 037
umask by itself displays the current mask value.
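A quick worked example (file names are illustrative): with a umask of 037, a newly created file (requested as 666) gets mode 640 (rw-r-----) and a newly created directory (requested as 777) gets 740 (rwxr-----):
umask 037
touch newfile; mkdir newdir; ls -ld newfile newdir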
changing a file's owner and group at the same time
The chown command permits you to specify a group after the username, separated by a colon.
chown barbara:smilies /tmp/input
Clean file content:    echo "" > file    (this leaves a single newline; use > file or cat /dev/null > file to truncate to zero length)
System Admin
ulimit - get and set user limits
Syntax: ulimit [-SHacdfilmnpqstuvx] [limit](on ubuntu)
The ulimit utility allows you to limit almost every per-user and per-process resource in Linux except disk storage (for which quota is used).
Seeing What Is Currently Set: ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
ulimit -H -a; ulimit -S -a
Setting a Value:
Setting a process limit: ulimit -u 30
Since neither -H nor -S was specified, the value is set for both the hard and soft limits.
Setting Soft Limits: [ulimit -S] ulimit -S -u 100
Setting Hard Limits: ulimit -H
A hard limit really is a rule. Once set, it cannot be exceeded.
The two ways to set the hard limit are to not specify anything (ulimit -u 100), which effectively sets both the hard and soft limits, or to use the -H parameter: ulimit -H -u 100.
A bash fork bomb; without such limits, any user with shell access to your box could take it down:
$ :(){ :|:& };:
Miscellaneous Commands
* To set a value to unlimited, use the word itself: ulimit -u unlimited .
* To see only one value, specify that parameter. For example, to see the soft value of user processes, enter: ulimit -Su
* Default values are set in /etc/profile but can, in some implementations, also be derived from values set in /etc/initscript or /etc/security/limits.conf.
Changing the User Max File Descriptor Limit:
* add the following lines to /etc/security/limits.conf
* soft nofile 8192
* hard nofile 8192
* The soft limit is the default maximum value you get when you log in.
* The hard limit is the maximum value to which you can raise the soft limit using the ulimit command.
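After logging in again, the new limit can be checked (a quick sanity check, assuming a PAM setup that reads limits.conf):
ulimit -n        # soft limit
ulimit -H -n     # hard limit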
sysctl - configure kernel parameters at runtime
sysctl is used to modify kernel parameters at runtime. The parameters available are those listed under /proc/sys/. If you wish to keep settings persistent across reboots you should edit /etc/sysctl.conf
Exploring sysctl variables: sysctl -a
Reducing swappiness: sysctl vm.swappiness=0
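To make the change survive reboots (assuming a stock /etc/sysctl.conf setup), append it to the file and reload:
echo 'vm.swappiness = 0' >> /etc/sysctl.conf
sysctl -p        # reload settings from /etc/sysctl.conf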

Resources:
http://www.oreillynet.com/linux/cmd/ (Alphabetical Directory of Linux Commands)
