@axecnarf87
Last active April 14, 2026 12:23
Linux Foundations - LFS101x - Introduction to Linux Course - basic commands and some brief basic knowledge
Service - Program which runs as a background process
"initramfs" filesystem image contains programs and binary files that perform all actions needed to mount the proper root filesystem in RAM.
"init" program on the root filesystem (/sbin/init) is executed. Handles the mounting and pivoting over to the final real root filesystem. If special hardware drivers are needed before the mass storage can be accessed, they must be in the initramfs image.
The kernel runs the "/sbin/init" program, which then becomes the initial process that gets the system running.
"parted -l" to determine the type of partition table
"parted /dev/sda p" for listing the partition table of a drive; operate on the device node of the physical drive (/dev/sda), not a partition (/dev/sda1, /dev/sda2, etc. are partitions).
"gdisk -l /dev/sda" lists the GPT layout (gdisk is an interactive GUID partition table (GPT) manipulator; -l only prints the table)
-The older startup system (SysVinit) viewed things as a serial process, divided into a series of sequential stages.
"systemd" the more recent startup system. It uses aggressive parallelization techniques, which permit multiple services to be initiated simultaneously
"sudo systemctl start/stop/restart fooservice" Starting, stopping, restarting a service (fooservice could be something like nfsd or the network) on a currently running system
"sudo systemctl enable/disable fooservice" Enabling or disabling a system service from starting up at system boot
Conventional disk filesystems: ext2, ext3, ext4, XFS, Btrfs, JFS, NTFS, etc.
Flash storage filesystems: ubifs, JFFS2, YAFFS, etc.
Database filesystems
Special purpose filesystems: procfs, sysfs, tmpfs, debugfs, etc.
                               Windows      Linux
Partition                      Disk1        /dev/sda1
Filesystem type                NTFS/VFAT    EXT3/EXT4/XFS/BTRFS...
Mounting parameters            DriveLetter  MountPoint
Base folder where OS is stored C:\          /
Linux systems store their important files according to a standard layout called the "Filesystem Hierarchy Standard (FHS)"
"/run/media/yourusername/disklabel" Removable media such as USB drives and CDs and DVDs will show up as mounted
"/usr" where many distributions place utilities that are not the core utilities needed for proper system operation (i.e., other programs)
"cat" used to type out a file (or combine files)
"head" used to show the first few lines of a file
"tail" used to show the last few lines of a file
"man" used to view documentation.
Usually, the default command shell is "bash" (the GNU Bourne Again Shell).
The "command" is the name of the program you are executing. It may be followed by one or more "options" (or switches) that modify what the command may do. Options usually start with one or two dashes, for example, -p or --print, in order to differentiate them from "arguments", which represent what the command operates on.
Setting Up and Running sudo
1. "su" > admin password, to operate as the root user (superuser)
2. "/etc/sudoers.d/" directory containing the configuration files that enable user accounts to use sudo. Create one (as root) with "echo "<username> ALL=(ALL) ALL" > /etc/sudoers.d/<username>"
3. "chmod 440 /etc/sudoers.d/<username>" change the file's permissions to user read, group read, others none
"sudo systemctl stop gdm" stop GUI
"sudo systemctl start gdm" start GUI
"ssh username@remote-server.com" SSH (Secure Shell) would connect securely to the remote machine and give you a command line terminal window
"shutdown -h" shutdown and halt
"shutdown -r" shutdown and reboot
"sudo shutdown -h 10:00 "Shutting down for scheduled maintenance.""
"/bin" "/usr/bin" "/sbin" "/usr/sbin" "/opt" where executable programs are located
"which <program>" to locate program
"whereis <program>" to locate program, source and man, in a broader range of system directories
"echo $HOME" "echo ~" print the exact path of the home directory (short-cut name is ~ (tilde))
"pwd" Displays the present working directory
"cd ~" "cd" Change to your home directory
"cd .." Change to parent directory
"cd -" Change to previous directory
"tree" view of the filesystem tree "tree -d" to view just the directories and to suppress listing file names
"cd /" changes your current directory to the root (/) directory (or path you supply)
"ls" list the contents of the present working directory
"ls -a" list all files including hidden files and directories (those whose name starts with . )
The "ln" utility is used to create hard links and (with the -s option) soft links, also known as symbolic links or symlinks.
"ln <existing file> <hard link file>" create a hard link
- "ls -li <existing file> <hard link file>" detailed list with inode number ("-i"), a unique quantity for each file object.
- If you remove either <existing file> or <hard link file> in the example, the inode object (and the remaining file name) will remain, which might be undesirable, as it may lead to subtle errors later if you recreate a file of that name.
- If you edit one of the files, exactly what happens depends on your editor; most editors, including vi and gedit, will retain the link by default, but it is possible that modifying one of the names may break the link and result in the creation of two objects.
"ln -s <existing file> <soft link file>" create a symbolic/soft link
- <soft link file> no longer appears to be a regular file, and it clearly points to <existing file> and has a different inode number.
- Symbolic links take no extra space on the filesystem (unless their names are very long).
- Unlike hard links, soft links can point to objects even on different filesystems (or partitions) which may or may not be currently available or even exist (in this case it is a dangling link)
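A minimal sketch of the link behaviors described above, run in a throwaway directory (all file names here are invented for illustration):

```shell
# Hard vs. soft links, demonstrated in a temporary directory.
cd "$(mktemp -d)"
echo "hello" > original.txt

ln original.txt hard.txt        # hard link: same inode as original.txt
ln -s original.txt soft.txt     # soft link: its own inode, points by name
ls -li                          # note the matching inode numbers (-i)

rm original.txt
cat hard.txt                    # prints "hello": the data object survives
cat soft.txt 2>/dev/null || echo "dangling link"   # the symlink target is gone
```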
"pushd <directory>" instead of cd; this pushes your starting directory onto a list.
"popd" will then send you back to those directories, walking in reverse order (the most recent directory will be the first one retrieved with popd).
"dirs" display the list of directories involved with pushd and popd
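A short walk-through of the directory stack (pushd/popd/dirs are bash builtins; /tmp and /usr are arbitrary example directories):

```shell
# Directory-stack navigation with bash builtins.
cd /tmp
pushd /usr > /dev/null    # go to /usr, pushing /tmp onto the stack
pwd                       # /usr
dirs                      # /usr /tmp
popd > /dev/null          # pop the stack: back to /tmp
pwd                       # /tmp
```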
"cat" viewing files that are not very long; it does not provide any scroll-back.
"cat <file1> <file2>" Concatenate multiple files and display the output
"cat <file1> <file2> > <newfile>" Combine multiple files and save the output into a new file
"cat <file> >> <existingfile>" Append a file to the end of an existing file
"cat > <file>" Any subsequent lines typed will go into the file, until CTRL-D is typed
"cat >> <file>" Any subsequent lines are appended to the file, until CTRL-D is typed
"cat > <filename> << EOF
> <txt>
> <txt>
EOF"
"tac" As cat but prints the lines of a file in reverse order
"echo <string>" Displays (echoes) text [-e=enable special character sequences such as newline "\n" or horizontal tab "\t"]
"echo <string> > <newfile>" The specified string is placed in a new file
"echo <string> >> <existingfile>" The specified string is appended to the end of an already existing file
"echo $VARIABLE" The contents of the specified environment variable are displayed
"less" to view larger files because it is a paging program; it pauses at each screen full of text, provides scroll-back capabilities, and lets you search and navigate within the file. Note: Use "/" to search for a pattern in the forward direction and "?" for a pattern in the backward direction. (An older program named more is still used, but has fewer capabilities.)
"tail" to print the last 10 lines of a file by default. You can change the number of lines by doing "-n 15" or just "-15" if you wanted to look at the last 15 lines instead of the default.
"head" The opposite of tail; by default, it prints the first 10 lines of a file.
"touch" is often used to set or update the access, change, and modify times of files. By default, it resets a file's time stamp to match the current time.
"touch <namenewfile>" create a new file
"touch -t 03201600 <myfile>" sets the myfile file's time stamp to 4 p.m., March 20th (to put 18/03/2018 16:00, insert 1803181600).
"mkdir <namedirectory>" create a new directory
"rmdir" remove an empty directory
"rm -rf" Forcefully (without y/n confirmation prompts) remove a directory and all of its contents
"mv" Rename a file/directory
"rm" Remove a file
"rm -f" Forcefully (without y/n confirmation prompts) remove a file
"rm -i" Interactively remove a file
"cp <old position file> <new position file>" copy file
The "$PS1" variable is the character string that is displayed as the prompt on the command line. To set it "PS1="\u@\h \$ ""
When commands are executed, by default there are three standard file streams (or descriptors) always open for use: standard input (standard in or stdin), standard output (standard out or stdout) and standard error (or stderr)
Name symbolic name Value Example
standard input stdin 0 keyboard
standard output stdout 1 terminal
standard error stderr 2 log file
If we have a program called do_something that reads from stdin and writes to stdout and stderr:
"do_something < input-file" change its input source
"do_something > output-file" to send the output to a file, error messages will still be seen on the terminal windows
"do_something 2> error-file" to redirect stderr to a separate file
"do_something >& all-output-file" to redirect stderr to the same output file of stdout
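The redirections above can be sketched with a stand-in function (the name do_something mirrors the notes; it is not a real command):

```shell
# A stand-in do_something that writes to both streams.
do_something() { echo "normal output"; echo "an error" >&2; }

cd "$(mktemp -d)"
do_something > out.txt 2> err.txt    # stdout and stderr into separate files
cat out.txt                          # normal output
cat err.txt                          # an error

do_something > all.txt 2>&1          # both streams into one file
cat all.txt                          # ("2>&1" is the portable spelling of ">&")
```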
"command1 | command2 | command3" pipe "|" the output of one command or program into another as its input
"locate zip | grep bin" "locate" performs a search through a previously constructed database of files and directories on your system, matching all entries that contain a specified character string. "grep" will print only the lines that contain one or more specified strings
"updatedb" update the database used by "locate"
WILDCARDS
"?" matches any single character
"*" matches any string of characters
"[set]" matches any character in the set of characters, for example [adf] will match any occurrence of "a", "d", or "f"
"[!set]" matches any character not in the set of characters
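The wildcard rules above in action, on invented file names in a temporary directory:

```shell
# Wildcard (glob) matching examples.
cd "$(mktemp -d)"
touch a.txt b.txt abc.txt notes.md
ls ?.txt       # a.txt b.txt        (? = exactly one character)
ls *.txt       # a.txt abc.txt b.txt
ls [ab].txt    # a.txt b.txt        (character set)
ls [!a]*.txt   # b.txt              (names NOT starting with a)
```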
"core files" contain diagnostic information after a program fails
"/tmp" temporary directories
"find" it recurses down the filesystem tree from any particular directory (or set of directories) and locates files that match specified conditions. The default pathname is always the present working directory
"-name" only list files with a certain pattern in their name
"-iname" like -name, but the match is case insensitive
"-type" will restrict the results to files of a certain specified type. "d" for directory, "l" for symbolic link, or "f" for a regular file, etc
"-exec" to run commands on the files that match your search criteria
"-ok" as exec, but with permission prompt
"find -name "*.swp" -exec rm {} ';' " to find and remove all files that end with .swp
{} (squiggly brackets) is a place holder that will be filled with all the file names that result from the find expression, and the preceding command will be run on each one individually.
you have to end the command with either ';' (including the single-quotes) or "\;". Both forms are fine.
"-ctime <number>" when the inode metadata (i.e., file ownership, permissions, etc.) last changed
"-atime <number>" accessed/last read
"-mtime <number>" modified/last written
The number is the number of days and can be expressed as either a number (n) that means exactly that value, +n, which means greater than that number, or -n, which means less than that number.
"-cmin <number>" as -ctime, but in minutes
"-amin <number>" as -atime, but in minutes
"-mmin <number>" as -mtime, but in minutes
"-size <number>" search by size. 512-byte blocks, by default.
bytes (c), kilobytes (k), megabytes (M), gigabytes (G)
file sizes can also be exact numbers (n), +n or -n
"find / -size +10M -exec <command> {} ';'" to find files greater than 10 MB in size and running a command on those files
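The -exec form can be tried safely on scratch files (names invented here):

```shell
# find + -exec: delete every *.swp below the current directory.
cd "$(mktemp -d)"
touch keep.txt junk.swp more.swp
find . -name "*.swp" -exec rm {} ';'   # {} is replaced by each matching path
ls                                      # only keep.txt remains
```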
"rpm -i foo.rpm" install package
"yum install foo" install package, dependencies
"rpm -e foo.rpm" remove package
"yum remove foo" remove package, dependencies
"rpm -U foo.rpm" update package
"yum update foo" update package, dependencies
"yum update" Update the entire system
"rpm -qa" or "yum list installed" show all installed packages
"rpm -qil foo" get information on package
"yum list "foo"" show packages named foo
"yum list" show all available packages
"rpm -qf file" what package is file part of?
"man -f <program>" generates the same result as typing "whatis".
"man -k <program>" generates the same result as typing "apropos" (search the manual page names and descriptions).
The man pages are divided into nine numbered chapters (1 through 9)
"man 3 <program>" to display the page from a particular chapter
"man -a <program>" display all pages with the given name in all chapters, one after the other.
"info <topic name>" GNU project's standard documentation format
"n" go to the next node
"p" go to the previous node
"u" move one node up in the index
"<command> --help" or "<command> -h" show most commands short description
"help <command>" same as --help for built-in commands (built into a shell interpreter such as sh, ksh, bash, dash, csh, etc. These commands are always available in memory, so accessing them is a bit faster than external commands, which are stored on the hard disk.)
"/usr/share/doc" documentation directly pulled from the upstream source code, can also contain information about how the distribution packaged and set up the software
PROCESS
A "process" is simply an instance of one or more related tasks (threads) executing on your computer. It is not the same as a program or a command; a single program may actually start several processes simultaneously.
A critical kernel function called the "scheduler" constantly shifts processes on and off the CPU, sharing time according to relative priority, how much time is needed and how much has already been granted to a task.
- "running state", it means it is either currently executing instructions on a CPU, or is waiting to be granted a share of time (a time slice) so it can execute
- "sleep state", generally when a process is waiting for something to happen before it can resume; it is sitting on a "wait" queue.
- "zombie state" when a child process completes, but its parent process has not asked about its state; it is not really alive, but still shows up in the system's list of processes
"runaway process" a process in a non-responding state
Process types (with description and examples):
-Interactive Processes: need to be started by a user, either at a command line or through a graphical interface such as an icon or a menu selection. Examples: bash, firefox, top
-Batch Processes: automatic processes which are scheduled from and then disconnected from the terminal. These tasks are queued and work on a FIFO (First In, First Out) basis. Example: updatedb
-Daemons: server processes that run continuously. Many are launched during system startup and then wait for a user or system request indicating that their service is required. Examples: httpd, xinetd, sshd
-Threads: lightweight processes. These are tasks that run under the umbrella of a main process, sharing memory and other resources, but are scheduled and run by the system on an individual basis. An individual thread can end without terminating the whole process, and a process can create new threads at any time. Many non-trivial programs are multi-threaded. Examples: firefox, gnome-terminal-server
-Kernel Threads: kernel tasks that users neither start nor terminate and have little control over. These may perform actions like moving a thread from one CPU to another, or making sure input/output operations to disk are completed. Examples: kthreadd, migration, ksoftirqd
ID Type Description
Process ID (PID) Unique Process ID number
Parent Process ID (PPID) Process (Parent) that started this process. If the parent dies, the PPID will refer to an adoptive parent; on recent kernels, this is kthreadd which has PPID=2.
Thread ID (TID) Thread ID number. This is the same as the PID for single-threaded processes. For a multi-threaded process, each thread shares the same PID, but has a unique TID.
"kill -SIGKILL <pid>" or "kill -9 <pid>" to terminate a process.
"Real User ID (RUID)" user who starts the process
"Effective UID (EUID)" the user whose permissions determine the process's access rights
"Real Group ID (RGID)"
"Effective Group ID (EGID)"
The "priority" for a process can be set by specifying a "nice value", or "niceness". The nicer the process, the lower the priority. A nice value of -20 represents the highest priority and 19 represents the lowest.
"real-time priority" to time-sensitive tasks. This is a very high priority.
"hard real time" making sure a job gets completed within a very well-defined time window.
"Load average" is the average of the load number for a given period of time. It takes into account processes that are:
-Actively running on a CPU.
-Considered runnable, but waiting for a CPU to become available.
-Sleeping: i.e., waiting for some kind of resource (typically, I/O) to become available.
The load average is displayed as three numbers: the average over the last minute, the last 5 minutes, and the last 15 minutes. On a single-CPU system, 0.45 means the CPU was utilized 45% of the time, 4.50 means 450% (heavily overloaded).
"w" shows the load average
"uptime" shows the load average
"top" shows the load average
-the first line, display how long the system has been up, how many users are logged on, what is the load average
-the second line of the top output displays the total number of processes, the number of running, sleeping, stopped, and zombie processes.
-third line shows how the CPU time is being divided between the users (us) and the kernel (sy) by displaying the percentage of CPU time used for each. (ni) niceness, the percentage of user jobs running at a lower priority. Idle mode (id) should be low if the load average is high, and vice versa. The percentage of jobs waiting (wa) for I/O is listed. Interrupts include the percentage of hardware (hi) vs. software interrupts (si). Steal time (st) is generally relevant for virtual machines, which have some of their idle CPU time taken for other uses.
-line 4, Physical memory (RAM)
-line 5, Swap space (temporary storage space on the hard drive). Once the physical memory is exhausted, the system starts using swap space as an extended memory pool, and since accessing disk is much slower than accessing memory, this will negatively affect system performance.
Both of the last two categories display total memory, used memory, and free space.
-Each line in the process list of the top output displays information about a process. By default, processes are ordered by highest CPU usage. The following information about each process is displayed:
-Process Identification Number (PID)
-Process owner (USER)
-Priority (PR) and nice values (NI)
-Virtual (VIRT), physical (RES), and shared memory (SHR)
-Status (S)
-Percentage of CPU (%CPU) and memory (%MEM) used
-Execution time (TIME+)
-Command (COMMAND).
The table lists what happens when pressing various keys when running top:
Command Output
"t" Display or hide summary information (rows 2 and 3)
"m" Display or hide memory information (rows 4 and 5)
"A" Sort the process list by top resource consumers
"r" Renice (change the priority of) a specific process
"k" Kill a specific process
"f" Enter the top configuration screen
"o" Interactively select a new sort order in the process list
"<command> &" suffixing & to put a job in the background
Background jobs remain connected to the terminal window, so, if you log off, the jobs utility will no longer show the ones started from that window
"CTRL-Z" to suspend a foreground job
"CTRL-C" to terminate a foreground job
"bg <job number>" to resume a suspended process in the background
"fg <job number>" to bring a background or suspended job to the foreground
"jobs -l" displays all jobs running in the bg, including the PID
"ps" provides information about currently running processes keyed by PID
If you want a repetitive update of this status, you can use top or other commonly installed variants, such as htop or atop, from the command line, or invoke your distribution's graphical system monitor application
"ps -u" to display information of processes for a specified username
"ps -ef" displays all the processes in the system in full detail
"ps -eLf" displays one line of information for every thread
BSD style (style of option specification, without preceding dashes):
"ps aux" displays all processes of all users
"ps axo <attribute>,<attribute>,<attribute>" allows you to specify which attributes you want to view, e.g. "ps axo stat,priority,pid,pcpu"
"pstree" displays the processes running on the system in the form of a tree diagram. Repeated entries of a process are not displayed, and threads are displayed in curly braces
SCHEDULING FUTURE JOBS
"at now + <time spec>" then type the command(s) to schedule, one per line, and press CTRL-D to finish
eg. $ tty
/dev/pts/1
$ at now + 1 min
at> echo 'yo bro' > /dev/pts/1
at> <EOT>
$ at 10 am tomorrow
$ at 11:00 next month
$ at 22:00 today
$ at now + 1 week
$ at noon
"cron" is a time-based scheduling utility program. It can launch routine background jobs at specific times and/or days on an on-going basis. cron is driven by a configuration file called "/etc/crontab" (cron table)
"crontab -e" will open the crontab editor to edit existing jobs or to create new jobs.
Each line of the crontab file will contain 6 fields:
Field Description Values
MIN Minutes 0 to 59
HOUR Hour field 0 to 23
DOM Day of Month 1-31
MON Month field 1-12
DOW Day Of Week 0-6 (0 = Sunday)
CMD Command Any command to be executed
"* * * * * /usr/local/bin/execute/this/script.sh" will schedule a job to execute 'script.sh' every minute of every hour of every day of the month, and every month and every day in the week.
"30 08 10 06 * /home/sysadmin/full-backup" will schedule a full-backup at 8.30am, 10-June, irrespective of the day of the week
"sleep <number>[suffix]..." suspends execution for at least the specified period of time
where SUFFIX may be:
1. s for seconds (the default)
2. m for minutes
3. h for hours
4. d for days.
"atq" jobs queued up to run
"atrm <number>" remove/delete a planned job
"batch" will prompt for command input, which will be executed when the system load average is less than 1.5
USER ENVIRONMENT
Account, User, Group
"whoami" Identify current user
"who" List currently logged on users
"who -a" List currently logged on users; with more details
"w" List currently logged on users and what they are doing
"/etc/group" File contains basic group attributes
"/etc/passwd" File is used to keep track of every registered user that has access to a system
"id [<username>]" Show info about a user. With no argument, about the current user.
"sudo useradd <username>" Add a new user
"sudo userdel <username>" Remove a user
"sudo userdel -r <username>" Remove a user and its home directory
"sudo groupadd <groupname>" Add a new group
"sudo groupdel <groupname>" Remove a group
"groups [<username>]" Show what groups the user belongs to
"sudo usermod -a -G <group> <user>" Adding a user to an existing group [-a = append and avoid to remove already existing group]
"sudo groupmod [-option] <newoption> <old option>" Change group properties; -g = gid, -n = name, -p = password
"sudo usermod -G <user> <user>" Remove a user from its supplementary groups: -G replaces the whole group list, so listing only the user's own group (which usually matches the username) leaves it in that group alone
"sudo" <command> To execute a single command with root privilege
"/etc/sudoers" sudo configuration files
"su" Substitute user. Elevates to the root account; followed by <user name>, it becomes that user instead
- in "/etc" there are global settings for all users (STARTUP FILES)
- files in the user's home directory can override the global settings
*customise the prompt
*define command-line shortcuts and aliases
*set the default text editor
*set the path where executable programs are found
1 - "/etc/profile" is read first
The Linux login shell then evaluates whichever startup file it finds first among:
a - "~/.bash_profile"
b - "~/.bash_login"
c - "~/.profile"
"~/.bashrc" is the only one checked when you don't perform a full system login, but only create a new window or shell
"alias <namecommand>='command'" Customise command or modify behaviour already existing ones. Without argument, just list currently defined aliases
To make it persistent, modify the ~/.bashrc file (add the alias after fi)
and execute it ". ~/.bashrc".
"unalias <alias>" To remove an alias; to make it persistent modify the ~/.bashrc file
"set" List ENVIRONMENT VARIABLES (and shell variables; usually prints more results than the others)
"env" List ENVIRONMENT VARIABLES
"export" List exported ENVIRONMENT VARIABLES
"echo $<VARIABLE>" Show the value of a specific variable
"export VARIABLE=value" or "VARIABLE=value; export VARIABLE" Export a new variable
add "export VARIABLE=value" to "~/.bashrc" to make it permanent. Then execute it ". ~/.bashrc"
"pwd" Print (present) working directory
"$HOME" Environment variable: user's home, same as ~
"$PATH" Environment variable: tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user.
Usually /home/me/bin:/usr/local/bin:/usr/bin:/bin
directories are separated by ":"
"OLDPATH=$PATH" save a copy first, to avoid deleting it by mistake
"PATH=$PATH:</pathdirectory>"
"$PS1" used to customize your prompt string in your terminal window "\u" = User name, "\h" = Host name, "\w" = Current working directory, "\!" = History number of this command, "\d" = Date
"export PS1='\u@\h:\w$'" To set it to "me@example.com:~$"
"$SHELL" Full pathname to the shell
"history" To show your history command buffer ("~/.bash_history")
Associated environment variables:
"HISTFILE" The location of the history file
"HISTFILESIZE" The maximum number of lines in the history file (default 500)
"HISTSIZE" The maximum number of commands in the history file.
"HISTCONTROL" How commands are stored.
"HISTIGNORE" Which command lines should not be saved.
"!" to recall previously used commands (history expansion)
"!$" refer to the last argument in a line
"!<number>" refer to the number command line in history
"!<string>" refer to the most recent command starting with string
Up/Down arrow keys - Browse through the list of commands previously executed
"!!" (Pronounced as bang-bang) Execute the previous command
CTRL-R - Search previously used commands
CTRL-L - Clears the screen
CTRL-D - Exits the current shell
CTRL-Z - Puts the current process into suspended background
CTRL-C - Kills the current process
CTRL-H - Works the same as backspace
CTRL-A - Goes to the beginning of the line
CTRL-W - Deletes the word before the cursor
CTRL-U - Deletes from beginning of line to cursor position
CTRL-E - Goes to the end of the line
Tab Auto-completes files, directories, and binaries
"ls -l <namefile>" List file plus permissions
KIND OF PERMISSION --> rwx (read, write, execute)
GROUPS OF OWNER --> u:user, g:group, o:others
"chmod <group of owners>[+/-]<kind of permission>,<group of owners>[+/-]<kind of permission> <namefile>" Change the permissions on the file
example
"chmod uo+x,g-w <namefile>"
4=read, 2=write, 1=execute, 7=read/write/execute, 6=read/write, 5=read/execute
"chmod [shorthand u][shorthand g][shorthand o] <namefile>" example: chmod 755 somefile
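Symbolic and octal chmod side by side, on a scratch file (GNU "stat -c %a" is assumed for printing the octal mode; BSD/macOS stat uses different flags):

```shell
# chmod in both notations, verified with GNU stat.
cd "$(mktemp -d)"
touch somefile
chmod 640 somefile          # u=rw (6), g=r (4), o=none (0)
stat -c %a somefile         # 640
chmod u+x,g-r somefile      # add execute for user, drop read for group
stat -c %a somefile         # 700
```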
"sudo chown <owner> <namefile>" Change user ownership of a file or directory
"sudo chown <owner>:<group> <namefile>" Change user ownership and group of a file or directory
"sudo chgrp <group> <namefile>" Change group ownership
"zcat compressed-file.txt.gz" To view a compressed file
"zless <filename>.gz" or "zmore <filename>.gz" To page through a compressed file
"zgrep -i less test-file.txt.gz" To search inside a compressed file
"zdiff filename1.txt.gz filename2.txt.gz" To compare two compressed files
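The z* tools operate on gzip-compressed files; a quick sketch on an invented file (gzip assumed installed):

```shell
# Viewing and searching a gzip-compressed file without unpacking it.
cd "$(mktemp -d)"
printf 'first line\nsecond LINE\n' > notes.txt
gzip notes.txt               # produces notes.txt.gz
zcat notes.txt.gz            # view the contents
zgrep -i line notes.txt.gz   # both lines match, case-insensitively
```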
sed (stream editor) is a powerful text processing tool. It can filter text and perform substitutions in data streams.
"sed -e <command> <filename>" Specify editing commands at the command line, operate on file and put the output on standard out (e.g., the terminal)
"sed -f <scriptfile> <filename>" Specify a scriptfile containing sed commands, operate on file and put output on standard out.
The -e command option allows you to specify multiple editing commands simultaneously at the command line. It is unnecessary if you only have one operation invoked.
pattern=current string
replace_string=new string
"sed s/pattern/replace_string/ file" Substitute first string occurrence in a line
"sed s/pattern/replace_string/g file" Substitute all string occurrences in a line
"sed 1,3s/pattern/replace_string/g file" Substitute all string occurrences in a range of lines
"sed -i s/pattern/replace_string/g file" Save changes for string substitution in the same file
"sed s/pattern/replace_string/g file1 > file2" Replace all occurrences of pattern with replace_string in file1 and move the contents to file2. If you approve you can then overwrite the original file with "mv file2 file1"
Are the same:
"cat some_file | sed -e s/dog/pig/g"
"sed -e s:dog:pig:g some_file"
"sed -e s/dog/pig/g some_file"
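The first-occurrence vs. global forms above, on a made-up sample line:

```shell
# sed substitution: first match per line vs. all matches.
printf 'the dog chased the dog\n' | sed s/dog/pig/    # the pig chased the dog
printf 'the dog chased the dog\n' | sed s/dog/pig/g   # the pig chased the pig
```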
awk is used to extract and then print specific contents of a file and is often used to construct reports. Powerful utility and interpreted programming language, manipulate data files, retrieving, and processing text. Works well with field and records
"awk 'command' var=value file" Specify a command directly at the command line
"awk -f scriptfile var=value file" Specify a file that contains the script to be executed, with -f
"awk '{ print $0 }' /etc/passwd" Print entire file
"awk -F: '{ print $1 }' /etc/passwd" Print first field (column) of every line, separated by a space. The "-F" option allows you to specify a particular field separator character
"awk -F: '{ print $1 $7 }' /etc/passwd" Print first and seventh field of every line
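Field extraction with -F, sketched on invented passwd-style records rather than the real /etc/passwd:

```shell
# awk with a ':' field separator; sample records are made up.
printf 'alice:x:1000:/bin/bash\nbob:x:1001:/bin/sh\n' |
    awk -F: '{ print $1, $4 }'
# alice /bin/bash
# bob /bin/sh
```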
"sort <filename>" Sort the lines in the specified file, according to the characters at the beginning of each line
"cat file1 file2 | sort" Combine the two files, then sort the lines and display the output on the terminal
"sort -r <filename>" Sort the lines in reverse order
"sort -k 3 <filename>" Sort the lines by the 3rd field on each line instead of the beginning (note: sorting is lexicographic by default, so numeric fields may land in unexpected positions; "sort -n" sorts numerically)
"-u" sort checks for unique values after sorting the records (lines), as uniq does
"uniq" Remove duplicate lines in a text file and is useful for simplifying the text display
"uniq -c filename" count the number of duplicate entries
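Note that uniq only collapses adjacent duplicates, so input is normally sorted first (sample words invented here):

```shell
# Count duplicates: sort brings equal lines together, uniq -c counts them.
printf 'pear\napple\npear\napple\napple\n' | sort | uniq -c
#   3 apple
#   2 pear   (counts are left-padded)
```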
"paste <file1> <file2>" Create a single file containing all columns. The different columns are identified based on delimiters (spacing used to separate two fields).
"-d" delimiters (space, tab, |, comma), which specify a list of delimiters to be used instead of tabs for separating consecutive values on a single line. Each delimiter is used in turn; when the list has been exhausted, paste begins again at the first delimiter. "paste -d ':' <file1> <file2>"
"-s", which causes paste to append the data in series rather than in parallel; that is, in a horizontal rather than vertical fashion.
"join <file1> <file2>" It first checks whether the files share common fields, such as names or phone numbers, and then joins the lines in two files based on a common field.
"split <originalfile> [prefix]" to break up (or split) a file into equal-sized segments for easier viewing and manipulation, and is generally used only on relatively large files. By default, split breaks up a file into 1,000-line segments. The original file remains unchanged, and a set of new files named with the prefix plus an added suffix is created. The default prefix x is used if none is specified.
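A small demonstration of splitting, using a 10-line scratch file and 4-line chunks (the prefix part_ is arbitrary):

```shell
# split a 10-line file into 4-line chunks named part_aa, part_ab, ...
cd "$(mktemp -d)"
seq 1 10 > nums.txt
split -l 4 nums.txt part_
ls              # nums.txt part_aa part_ab part_ac
wc -l part_ac   # the last chunk holds the 2-line remainder
```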
REGULAR EXPRESSION
Search Patterns Usage
"."(dot) Match any single character
"a|z" Match a or z
"$" Match end of string
"^" Match start of string
"*" Match preceding item 0 or more times
example. the quick brown fox jumped over the lazy dog
Pattern Matches
a.. azy
b.|j. br and ju
..$ og
l.* lazy dog
l.*y lazy
the.* the whole sentence
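The patterns in the table can be checked with grep -o, which prints only the matched text:

```shell
# Verifying the regex examples against the sample sentence.
s="the quick brown fox jumped over the lazy dog"
echo "$s" | grep -o 'a..'     # azy
echo "$s" | grep -o '..$'     # og
echo "$s" | grep -o 'l.*y'    # lazy  (greedy: from the first l to the last y)
```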
"grep [pattern] <filename>" Search for a pattern in a file and print all matching lines. With option "-n" show the line number.
"grep -v [pattern] <filename>" Print all lines that do not match the pattern
"grep [0-9] <filename>" Print the lines that contain the numbers 0 through 9
"grep -C 3 [pattern] <filename>" Print context of lines (specified number of lines above and below the pattern) for matching the pattern. Here, the number of lines is specified as 3.
"grep -e [pattern] -e [pattern] <file>" search for a result or another
"strings" is used to extract all printable character strings found in the file or files given as arguments. It is useful in locating human-readable content embedded in binary files; for text files one can just use grep.
"strings <file> | grep <text>"
"tr [options] set1 [set2]" translate specified characters into other characters or to delete them.
"tr abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ" or "tr a-z A-Z" Convert lower case to upper case
"tr '{}' '()' < inputfile > outputfile" Translate braces into parenthesis
"echo "This is for testing" | tr [:space:] '\t'" Translate white-space to tabs
"echo "This is for testing" | tr -s [:space:]" queeze repetition of characters using -s
"echo "the geek stuff" | tr -d 't'" Delete specified characters using -d option
"echo "my username is 432234" | tr -cd [:digit:]" Complement the sets using -c option
"tr -cd [:print:] < file.txt" Remove all non-printable character from a file
"tr -s '\n' ' ' < file.txt" Join all the lines in a file into a single line
[:space:] SPACE
'\t' TAB
'\n' JOIN ALL LINES ON A SINGLE ONE
[:digit:] NUMBER
[:print:] PRINTABLE CHARACTERS
-s SQUEEZE CHARACTERS REPETITION
-d DELETE CHARACTERS
-c COMPLEMENT THE SETS. ex -cd DELETE OTHERS CHARACTERS
"tee" takes the output from any command, and, while sending it to standard output, it also saves it to a file. example "ls -l | tee newfile"
wc (word count) counts the number of lines, words, and characters in a file or list of files. By default all options are active (-l=lines, -c=bytes, -w=words)
"cut -d<delimiter> -f<n.column>" extract specific columns from a column-based file. The default column separator is the tab character; a different delimiter can be given as a command option.
NETWORK
Devices attached to a network must have at least one unique network address identifier known as the IP (Internet Protocol) address. The address is essential for routing packets (data buffers together with headers) of information through the network.
-IPv4 uses 32-bits for addresses (divided into four 8-bit sections called octets/byte); there are only 4.3 billion unique addresses available.
Class A the first octet of an IP address as their Net ID and use the other three octets as the Host ID
can have up to 16.7 million unique hosts on its network. The range of host address is from 1.0.0.0 to 127.255.255.255.
Class B addresses use the first two octets of the IP address as their Net ID and the last two octets as the Host ID
can support a maximum of 65,536 unique hosts on its network. The range of host address is from 128.0.0.0 to 191.255.255.255
Class C addresses use the first three octets of the IP address as their Net ID and the last octet as their Host ID
can support up to 256 (8-bits) unique hosts. The range of host address is from 192.0.0.0 to 223.255.255.255.
-IPv6 uses 128-bits for addresses; this allows for 3.4 x 10^38 unique addresses
IP addresses are requested from your Internet Service Provider (ISP) by your organization's network administrator
When you assign IP addresses manually, you add static (never changing) addresses to the network. When you assign IP addresses dynamically (they can change every time you reboot or even more often), the Dynamic Host Configuration Protocol (DHCP) is used to assign IP addresses.
Name Resolution is used to convert numerical IP address values into a human-readable format known as the hostname.
"localhost" / "127.0.0.1" describes the machine you are currently on (which normally has additional network-related IP addresses).
"hostname" show your hostname
"/etc/hosts" SOURCE (IP) DOMAIN (hostname) list HOSTNAMES
"/etc/sysconfig/network" where are located Network configuration files for FEDORA.
"/etc/resolv.conf" where is DNS
"host <website>" look up hostnames using DNS. Display IP address and specified hostname.
"nslookup linuxfoundation.org" look up name servers interactively (server and address depending from default gatway). With no argument you can put more than one and exit with ctrl+c
"dig linuxfoundation.org" look up domain name informatiion from nameserver
"sudo hostname <new name>" or "hostnamectl set-hostname <new hostname>" change hostname
"nmcli" network manager command line interface
"nmtui" network manager utility
Network interfaces are a connection channel between a device and a network. Physically, a network interface can be provided by a network interface card (NIC), or can be more abstractly implemented as software.
"ifconfig <network name>" "ifconfig" Info about all the network interfaces or a particular one ("/sbin/ifconfig").
"ip" (/sbin/ip) similar information to ipconfig
example "ip -s link show <network interface>"
"ip address show <network interface>"
"ip addr show" to view IP address
"ip route show" to view the routing information
"ping <hostname/IP>" check whether or not a machine attached to the network can receive and send data it confirms that the remote host is online and is responding
"route" show/manipulate Kernel IP routing table
A network requires the connection of many nodes. Data moves from source to destination by passing through a series of routers and potentially across multiple networks. Servers maintain routing tables containing the addresses of each node in the network. The IP Routing protocols enable routers to build up a forwarding table that correlates final destinations with the next hop addresses.
"route –n" or "ip route" Show current routing table
"route add -net address" or "ip route add" Add static route
"route del -net address" or "ip route del" Delete static route
"traceroute <address>" is used to inspect the route which the data packet takes to reach the destination host, which makes it quite useful for troubleshooting network delays and errors.
By using traceroute, you can isolate connectivity issues between hops, which helps resolve them faster.
A hop that outputs * * * means that the router at that hop doesn't respond to the type of packet you were using for the traceroute (by default it's UDP on Unix-like and ICMP on Windows).
"sudo ethtool <name network>" queries network interfaces and can also set various parameters such as the speed.
"netstat -r" displays all active connections and routing tables. Useful for monitoring performance and troubleshooting.
"nmap -sP <IP port>" scans open ports on a network. Important for security analysis
"tcpdump" Dumps network traffic for analysis.
"iptraf or "iptraf-ng -g" Monitors network traffic in text mode.
"mtr" Combines functionality of ping and traceroute and gives a continuously updated display.
"dig" Tests DNS workings. A good replacement for host and nslookup.
Non-Graphical Browsers Description
lynx Configurable text-based web browser; the earliest such browser and still in use.
links or elinks Based on lynx. It can display tables and frames.
w3m Another text-based web browser with many features.
"wget <url>" is a command line utility that can capably handle the following types of downloads:
Large file downloads
Recursive downloads, where a web page refers to other web pages and all are downloaded at once
Password-required downloads
Multiple file downloads.
"curl <url>" obtain information about a url and save the content of a page as does wget
"curl -o <file> <url>"
File Transfer Protocol (FTP) is a well-known and popular method for transferring files between computers using the Internet. This method is built on a client-server model. FTP can be used within a browser or with stand-alone client programs.
"ftp <server>" connecting to a ftp server
"get <name file>" receive file form the FTP server
sftp
ncftp
yafc (Yet Another FTP Client).
ftp.kernel.org, for example, has recently been slated to drop FTP access in favor of rsync and web browser https access. As an alternative, sftp is a very secure mode of connection, which uses the Secure Shell (ssh) protocol, which we will discuss shortly.
"ftp -p <website>" to connect to
"ls" to see the file
"get <file>" to download the file
"quit" to exit
"ssh <some_system>" to login to Secure Shell (SSH), a cryptographic network protocol used for secure data communication.
"ssh -l someone some_system" or "ssh someone@some_system" run as another user
"ssh some_system my_command" to run a command on a remote system
"scp <localfile> <user@remotesystem>:/home/user/" Secure Copy (scp) between two networked hosts. scp uses the SSH protocol for transferring data.
Network Troubleshooting
1)"student:/tmp> /sbin/ifconfig"
"student:/tmp> ip addr show" for IP address
1.a)if it does not show a IP address, start/restart Network Manager, with one of the follow command:
"student:/tmp> sudo systemctl restart NetworkManager"
"student:/tmp> sudo systemctl restart network"
"student:/tmp> sudo service NetworkManager restart"
"student:/tmp> sudo service network restart"
1.b)If your device was up but had no IP address, the above should have helped fix it, but you can try to get a fresh address with:
"student:/tmp> sudo dhclient <eth0>" name of right ethernet device
2)If your interface is up and running with an assigned IP address and you still can not reach google.com, we should make sure you have a valid hostname assigned to your machine, with hostname:
"student:/tmp> hostname"
3)See if the site is assigned to a IP address (by DNS) and is up and reachable with ping
"sudo ping -c 3 google.com"
"-c 3" to limit to 3 packets
If the result was "ping: unknown host google.com", it is likely that something is wrong with your DNS set-up.
The same applies if ping returns the IP address of your own host machine.
"host <google.com>"
"host <hostname> <8.8.8.8>"
8.8.8.8 is a public DNS server provided by Google. Querying a public server can be a useful trick if your network's DNS is misbehaving. You can also enter it in /etc/resolv.conf
"dig google.com"
4)If host and dig fail to resolve the name to an IP address:
-The DNS server may be down; try to ping it to see if it is alive (its IP address should be in /etc/resolv.conf)
-The DNS server may be up, but your machine cannot connect to it
-The route to the DNS server may not be correct
"sudo traceroute 8.8.8.8" if this returns only the first line, the default route is likely wrong
try "ip route show"; if it is blank or points to your own machine, you need to add a proper default route and run the same test we just did.
(mtr is an enhanced version of traceroute) "sudo mtr --report-cycles 3 8.8.8.8"
BASH SHELL SCRIPTING
"#!/bin/bash" the first line of the script, that starts with "#!" contains the full path command interpeter (in this case /bin/bash). What is available on the system is listed in /etc/shells, typically:
/bin/sh
/bin/bash
/bin/tcsh
/bin/csh
/bin/ksh
"chmod +x <file>" to make the file executable to all users
"./<file>" to run it
"bash <file>" to run it, even if it is not executable
"#" (hash-tag, pound-sign, number-sign) used to start comments in the script.
"read <variable>" read text in video and put it in a variable
"echo :$<variable>" print the variable
"exit" return value, by default success is 0 and stored in "$?".
"#" Used to add a comment, except when used as \#, or as #! when starting a script
"\" Used at the end of a line to indicate continuation on to the next line
";" Used to interpret what follows as a new command to be executed next
"$" Indicates what follows is an environment variable
">" Redirect output
">>" Append output
"<" Redirect input
"|" Used to pipe the result into the next command
"\" (backslash character) concatenation operator used to split long commands over multiple lines.
"scp abc@server1.linux.com:/var/ftp/pub/userdata/custdata/read \
abc@server3.linux.co.in:/opt/oradba/master/abc/"
to copy the file /var/ftp/pub/userdata/custdata/read from server1.linux.com, to the /opt/oradba/master/abc directory on server3.linux.co.in.
Chaining of commands
";" execute command sequentially
"&&" abort subsequent commands when an earlier one fails
"||" proceed until something succeeds and then you stop executing any further steps
">" to write a output to a file (The process of diverting the output to a file is called output redirection)
">>" will append output to a file if it exists.
"<" input redirection "wc < <file>" is the same as "wc <file>"
Shell scripts execute sequences of commands and other types of statements.
- Compiled applications (binary executable files, generally residing on the filesystem in well-known directories such as "/usr/bin". Shell scripts always have access to these applications such as rm, ls, df, vi, and gzip, which are programs compiled from lower level programming languages such as C.)
- Built-in bash commands (these run inside the shell itself rather than as separate programs. Sometimes, these commands have the same name as executable programs on the system, such as echo, which can lead to subtle problems. bash built-in commands include cd, pwd, echo, read, logout, printf, let, and ulimit. Thus, slightly different behavior can be expected from the built-in version of a command such as echo as compared to "/bin/echo")
A complete list of bash built-in commands can be found in the bash man page, or by simply typing help.
- Shell scripts or scripts from other interpreted languages, such as perl and Python.
Script parameters: depending on the command line arguments passed (numbers or strings), scripts will take different paths or arrive at different values.
"$ ./script.sh /tmp"
"$ ./script.sh 100 200"
"$0" Script name
"$1" First parameter
"$2, $3, etc." Second, third parameter, etc.
"$*" All parameters
"$#" Number of arguments
"$(<command>)" Command Substitution, substitute the result of a command as a portion of another command
example "ls /lib/modules/$(uname -r)" uname -r result in the kernel version and give the destination folder to ls.
By default, the variables created within a script are available only to the subsequent steps of that script. Any child processes (sub-shells) do not have automatic access to the values of these variables. To make them available to child processes, they must be promoted to environment variables using the export statement "export VAR=<value>" "VAR=value ; export VAR"
A function is a code block that implements a set of operations. Functions are also often called subroutines.
"function_name (){
command...
}"
functionsshowmess() {
echo The primary colors are: $1
}
functionsshowmess blu <--- calls the function with parameter blu
functionsshowmess rosso <--- calls the function with parameter rosso
functionsshowmess giallo <--- calls the function with parameter giallo
"if [condition]
then
statements
else
statements
fi"
"if [[ $op == a ]] ; then add $arg1 $arg2
elif [[ $op == s ]] ; then sub $arg1 $arg2
elif [[ $op == m ]] ; then mult $arg1 $arg2
elif [[ $op == d ]] ; then div $arg1 $arg2
else
echo $op is not a, s, m, or d, aborting ; exit 2
fi"
( EXPRESSION ) - EXPRESSION is true
! EXPRESSION - EXPRESSION is false
EXPRESSION1 -a EXPRESSION2 - both EXPRESSION1 and EXPRESSION2 are true
EXPRESSION1 -o EXPRESSION2 - either EXPRESSION1 or EXPRESSION2 is true
-n STRING - the length of STRING is nonzero
STRING - equivalent to -n STRING
-z STRING - the length of STRING is zero
STRING1 = STRING2 - the strings are equal
STRING1 != STRING2 - the strings are not equal
INTEGER1 -eq INTEGER2 - INTEGER1 is equal to INTEGER2
INTEGER1 -ge INTEGER2 - INTEGER1 is greater than or equal to INTEGER2
INTEGER1 -gt INTEGER2 - INTEGER1 is greater than INTEGER2
INTEGER1 -le INTEGER2 - INTEGER1 is less than or equal to INTEGER2
INTEGER1 -lt INTEGER2 - INTEGER1 is less than INTEGER2
INTEGER1 -ne INTEGER2 - INTEGER1 is not equal to INTEGER2
FILE1 -ef FILE2 - FILE1 and FILE2 have the same device and inode numbers
FILE1 -nt FILE2 - FILE1 is newer (modification date) than FILE2
FILE1 -ot FILE2 - FILE1 is older than FILE2
-b FILE - FILE exists and is block special
-c FILE - FILE exists and is character special
-d FILE - FILE exists and is a directory
-e FILE - FILE exists
-f FILE - FILE exists and is a regular file
-g FILE - FILE exists and is set-group-ID
-G FILE - FILE exists and is owned by the effective group ID
-h FILE - FILE exists and is a symbolic link (same as -L)
-k FILE - FILE exists and has its sticky bit set
-L FILE - FILE exists and is a symbolic link (same as -h)
-O FILE - FILE exists and is owned by the effective user ID
-p FILE - FILE exists and is a named pipe
-r FILE - FILE exists and read permission is granted
-s FILE - FILE exists and has a size greater than zero
-S FILE - FILE exists and is a socket
-t FD - file descriptor FD is opened on a terminal
-u FILE - FILE exists and its set-user-ID bit is set
-w FILE - FILE exists and write permission is granted
-x FILE - FILE exists and execute (or search) permission is granted
"man 1 test" full list of file conditions using the command
Operator Operation Meaning
&& AND The action will be performed only if both the conditions evaluate to true.
|| OR The action will be performed if any one of the conditions evaluate to true.
! NOT The action will be performed only if the condition evaluates to false.
To compare two number
[num1 -op num2]
Operator Meaning
-eq Equal to
-ne Not equal to
-gt Greater than
-lt Less than
-ge Greater than or equal to
-le Less than or equal to
Using the expr utility: expr is a standard but somewhat deprecated program. The syntax is as follows:
expr 8 + 8
echo $(expr 8 + 8)
Using the $((...)) syntax: This is the built-in shell format. The syntax is as follows:
echo $((x+1))
Using the built-in shell command let. The syntax is as follows:
let x=1+2 ; echo $x
In modern shell scripts, the use of expr is better replaced with var=$((...)).
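The three arithmetic forms side by side (x and y are illustrative variables):

```shell
x=5
echo $(expr $x + 3)    # → 8  (external expr utility, older style)
echo $((x + 3))        # → 8  (preferred built-in form)
let y=x+3 ; echo $y    # → 8  (let built-in)
```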
String Manipulation
"[[ string1 > string2 ]]" Compares the sorting order of string1 and string2.
"[[ string1 == string2 ]]" Compares the characters in string1 with the characters in string2.
"myLen1=${#string1}" Saves the length of string1 in the variable myLen1.
"${string:0:n}" To extract the first n characters of a string (0 is the character to begin from)
"${string#*.}" To extract characters in a string after a dot
CASE STATEMENT
"case expression in
pattern1) execute commands;;
pattern2) execute commands;;
pattern3) execute commands;;
pattern4) execute commands;;
* ) execute some default commands or nothing ;;
esac"
FOR LOOP
"for variable-name in list
do
execute one iteration for each item in the list until the list is finished
done"
WHILE LOOP
"while [true condition]
do
Commands for execution
----
done"
UNTIL LOOP
until [condition is false]
do
Commands for execution
----
done
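A sketch contrasting the two loops (the counter n is illustrative): while runs as long as its condition is true; until runs as long as it is false.

```shell
n=3
while [ $n -gt 0 ]    # loop while n is still positive
do
    echo "while: $n"
    n=$((n - 1))
done
# → while: 3 / while: 2 / while: 1

n=1
until [ $n -gt 3 ]    # loop until n exceeds 3
do
    echo "until: $n"
    n=$((n + 1))
done
# → until: 1 / until: 2 / until: 3
```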
script debugging
"bash -x ./scriptfile" to run bash in debug mode
Insert "set -x" before and "set +x" after a section of the script (for example, around lines 14-16) to trace only that section.
file stream
Description File -->Descriptor
stdin Standard Input, by default the keyboard/terminal for programs run from the command line -->0
stdout Standard output, by default the screen for programs run from the command line -->1
stderr Standard error, where output error messages are shown or saved -->2
Redirect the stream, for example stderr (descriptor 2) to a file "bash sample.sh 2> error.txt" or "./sample.sh 2> error.txt" ">>" to append, as always
"TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)" To create a temporary file
"TEMPDIR=$(mktemp -d /tmp/tempdir.XXXXXXXX)" To create a temporary directory
The XXXXXXXX is replaced by the mktemp utility with random characters to ensure the name of the temporary file cannot be easily predicted and is only known within your program.
It's a good habit
"TEMP=$(mktemp /tmp/tempfile.XXXXXXXX)
echo $VAR > $TEMP"
"/dev/null" Certain commands like find will produce voluminous amounts of output, which can overwhelm the console. To avoid this, we can redirect the large output to a special file (a device node) called /dev/null. Also called the bit bucket or black hole.
"ls -lR /tmp > /dev/null" the entire standard output stream is ignored, but any errors will still appear on the console.
"ls -lR /tmp >& /dev/null" both stdout and stderr will be dumped into /dev/null.
"$RANDOM" Random numbers can be generated by using the environment variable. "/dev/random" is used where very high quality randomness is required, such as one-time pad or key generation, but it is relatively slow to provide vaules. "/dev/urandom" is faster and suitable (good enough) for most cryptographic purposes.
PRINTING
The Linux standard for printing software is the Common UNIX Printing System (CUPS). It converts information from the application used to a language the printer can understand. It acts as a print server for local, as well as network printers.
"cupsd.conf" (network security) and "printers.conf" (printer-specific settings) are configuration file inside under the CUPS directory "/etc/cups/"
"ls -l /etc/cups/" full list of configuration files
"/var/spool/cups" stores print requests (d data files, c control files) - print queue
"/var/log/cups" stores log files, used by the scheduler to record activities that have taken place (access, error, page records). To view it "sudo ls -l /var/log/cups".
FILTERS convert job file format to printable formats.
DRIVERS contain descriptions for currently connected and configured printers ("/etc/cups/ppd/").
BACKEND helps to locate devices connected to the system.
"sudo service cups start"
"sudo service cups restart"
"sudo service cups status"
"sudo service cups stop"
"sudo chkconfig cups on" to configure cups required at boot time
"sudo chkconfig cups off" to configure cups as no longer required at boot time
For common USB printers, the "lsusb" utility will show a line for the printer.
The CUPS web interface is available on your browser at: http://localhost:631
"lp <filename>"To print the file to default printer
"lp -d printer <filename>" To print to a specific printer (useful if multiple printers are available)
"program | lp" or "echo string | lp" To print the output of a program
"lp -n number <filename>" To print multiple copies
"lpoptions -d printer" To set the default printer
"lpq -a" To show the queue status
"lpadmin" To configure printer queues
"lpoption" set printer option and defaults
"lpoptions help" to obtain a list of supported options
"lpstat -p -d" To get a list of available printers, along with their status
"lpstat -a" To check the status of all connected printers, including job numbers
"cancel <job-id>" or
"lprm <job-id>" To cancel a print job
"lpmove <job-id> <newprinter>" To move a print job to new printer
PostScript is a standard page description language.
enscript is a tool that is used to convert a text file to PostScript and other formats. It also supports Rich Text Format (RTF) and HyperText Markup Language (HTML). For example, you can convert a text file to two columns (-2) formatted PostScript using the command: "enscript -2 -r -p psfile.ps textfile.txt" This command will also rotate (-r) the output to print so the width of the paper is greater than the height (aka landscape mode) thereby reducing the number of pages required for printing.
"enscript -p psfile.ps textfile.txt" Convert a text file to PostScript (saved to psfile.ps)
"enscript -n -p psfile.ps textfile.txt" Convert a text file to n columns where n=1-9 (saved in psfile.ps)
"enscript textfile.txt" Print a text file directly to the default printer
"pdf2ps file.pdf" Converts file.pdf to file.ps
"ps2pdf file.ps" Converts file.ps to file.pdf
"pstopdf input.ps output.pdf" Converts input.ps to output.pdf
"pdftops input.pdf output.ps" Converts input.pdf to output.ps
"convert input.ps output.pdf" Converts input.ps to output.pdf
"convert input.pdf output.ps" Converts input.pdf to output.ps
As an alternative, there are pstopdf and pdftops which are usually part of the poppler package
Another possibility is to use the very powerful "convert" program, which is part of the ImageMagick package.
"pdftk" PDF Toolkit, can:
Merging/Splitting/Rotating PDF documents
Repairing corrupted PDF pages
Pulling single pages from a file
Encrypting and decrypting PDF files
Adding, updating, and exporting a PDF’s metadata
Exporting bookmarks to a text file
Filling out PDF forms.
"pdftk 1.pdf 2.pdf cat output 12.pdf" Merge the two documents 1.pdf and 2.pdf. The output will be saved to 12.pdf.
"pdftk A=1.pdf cat A1-2 output new.pdf" Write only pages 1 and 2 of 1.pdf. The output will be saved to new.pdf.
"pdftk A=1.pdf cat A1-endright output new.pdf" Rotate all pages of 1.pdf 90 degrees clockwise and save result in new.pdf.
"pdftk public.pdf output private.pdf user_pw PROMPT" a new file "private.pdf" will be create with the identical content as "public.pdf", but anyone will need to type the password to be able to view it
Ghostscript is widely available as an interpreter for the Postscript and PDF languages. The executable program associated with it is abbreviated to gs.
"gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=all.pdf file1.pdf file2.pdf file3.pdf" to combine two file into one
"gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dDOPDFMARKS=false -dFirstPage=10 -dLastPage=20\
-sOutputFile=split.pdf file.pdf" Split pages 10 to 20 out of a PDF file
You can use other tools, such as pdfinfo, flpsed, and pdfmod to work with PDF files.
pdfinfo can extract information about PDF files, especially when the files are very large or when a graphical interface is not available.
flpsed can add data to a PostScript document. This tool is specifically useful for filling in forms or adding short comments into the document.
pdfmod is a simple application that provides a graphical interface for modifying PDF documents. Using this tool, you can reorder, rotate, and remove pages; export images from a document; edit the title, subject, and author; add keywords; and combine documents using drag-and-drop action.
"pdfinfo /usr/share/doc/readme.pdf" to collect the details of a document
"/etc/passwd"
Username - User login name - Should be between 1 and 32 characters long
Password - User password (or the character x if the password is stored in the /etc/shadow file) in encrypted format - Is never shown in Linux when it is being typed; this stops prying eyes
User ID (UID) - Every user must have a user id (UID) - UID 0 is reserved for root user
UID's ranging from 1-99 are reserved for other predefined accounts
UID's ranging from 100-999 are reserved for system accounts and groups
Normal users have UID's of 1000 or greater
Group ID (GID) - The primary Group ID (GID); Group Identification Number stored in the /etc/group file - Is covered in detail in the chapter on Processes
User Info - This field is optional and allows insertion of extra information about the user such as their name - For example: Rufus T. Firefly
Home Directory - The absolute path location of user's home directory - For example: /home/rtfirefly
Shell - The absolute location of a user's default shell - For example: /bin/bash
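The seven fields can be pulled apart with awk; the sample record below is made up:

```shell
# Split an /etc/passwd-style record on ':' and print selected fields
line="rtfirefly:x:1000:1000:Rufus T. Firefly:/home/rtfirefly:/bin/bash"
echo "$line" | awk -F: '{print "user:", $1, "| uid:", $3, "| home:", $6, "| shell:", $7}'
# → user: rtfirefly | uid: 1000 | home: /home/rtfirefly | shell: /bin/bash
```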
Linux has four types of accounts:
root
System
Normal
Network
"last" shows the last time each user logged into the system, can be used to help identify potentially inactive accounts which are candidates for system removal.
SUID (Set owner User ID upon execution—similar to the Windows "run as" feature) is a special kind of file permission given to a file. SUID provides temporary permissions to a user to run a program with the permissions of the file owner (which may be root) instead of the permissions held by the user.
"passwd" Change password to users
"/etc/sudoers" "/etc/sudoers.d" keep track of unsuccessful attempts at gaining root access
<who where = (as_whom) what> format of a sudo user configuration, example FFinetti ALL=(ALL) ALL
To give sudo permission to an account, add this line to /etc/sudoers:
newuser ALL=(ALL) ALL
Alternatively, create a file named /etc/sudoers.d/newuser with just that one line as content.
"/var/log/secure" (system log file) message when trying to execute sudo bash without successfully authenticating the user (Calling username, Terminal info, Working directory, User account invoked, Command with arguments)
"visudo <file to edit>" which ensures that only one person is editing the file at a time
Additional security mechanisms that have been recently introduced in order to make risks even smaller are:
Control Groups (cgroups): Allows system administrators to group processes and associate finite resources to each cgroup.
Linux Containers (LXC): Makes it possible to run multiple isolated Linux systems (containers) on a single system by relying on cgroups.
Virtualization: Hardware is emulated in such a way that not only processes can be isolated, but entire systems are run simultaneously as isolated and insulated guests (virtual machines) on one physical host.
Hardware Device Access
Linux limits user access to non-networking hardware devices in a manner that is extremely similar to regular file access
This layer will then open a device special file (often called a device node) under the "/dev" directory that corresponds to the device being accessed. Each device special file has standard owner, group and world permission fields. Security is naturally enforced just as it is when standard files are accessed.
Hard disks, for example, are represented as /dev/sd*
The normal reading and writing of files on the hard disk by applications is done at a higher level through the filesystem, and never through direct access to the device node.
PASSWORD ALGORITHM
Most Linux distributions rely on a modern password encryption algorithm called SHA-512 (Secure Hashing Algorithm 512 bits), developed by the U.S. National Security Agency (NSA) to encrypt passwords.
The SHA-512 algorithm is widely used for security applications and protocols. These security applications and protocols include TLS, SSL, PHP, SSH, S/MIME and IPSec. SHA-512 is one of the most tested hashing algorithms.
"echo -n <word> | sha512sum" produce the SHA-512 form of the word
"chage --list <user>" list password expiry information. The same can be used it to set them
"sudo chage -E 2014-31-12 newuser" set data expire
Another method is to force users to set strong passwords using Pluggable Authentication Modules (PAM). PAM can be configured to automatically verify that a password created or modified using the passwd utility is sufficiently strong. PAM configuration is implemented using a library called pam_cracklib.so, which can also be replaced by pam_passwdqc.so for more options.
One can also install password cracking programs, such as John The Ripper, to secure the password file and detect weak password entries. It is recommended that written authorization be obtained before installing such tools on any system that you do not own.
When hardware is physically accessible, security can be compromised by:
Key logging: Recording the real time activity of a computer user including the keys they press. The captured data can either be stored locally or transmitted to remote machines.
Network sniffing: Capturing and viewing the network packet level data on your network.
Booting with a live or rescue disk
Remounting and modifying disk content.
The guidelines of security are:
Lock down workstations and servers.
Protect your network links such that it cannot be accessed by people you do not trust.
Protect your keyboards where passwords are entered to ensure the keyboards cannot be tampered with.
Ensure a password protects the BIOS in such a way that the system cannot be booted with a live or rescue DVD or USB key.