


Use find command but exclude files in two directories

I want to find files that end with _peaks.bed, but exclude files in the tmp and scripts folders.

My command is like this:

 find . -type f \( -name "*_peaks.bed" ! -name "*tmp*" ! -name "*scripts*" \)

But it didn't work; the files in the tmp and scripts folders are still displayed.

Does anyone have ideas about this?


Here's how you can specify that with find:

find . -type f -name "*_peaks.bed" ! -path "./tmp/*" ! -path "./scripts/*"


  • find . - Start find from current working directory (recursively by default)
  • -type f - Specify to find that you only want files in the results
  • -name "*_peaks.bed" - Look for files with the name ending in _peaks.bed
  • ! -path "./tmp/*" - Exclude all results whose path starts with ./tmp/
  • ! -path "./scripts/*" - Also exclude all results whose path starts with ./scripts/

Testing the Solution:

$ mkdir a b c d e
$ touch a/1 b/2 c/3 d/4 e/5 e/a e/b
$ find . -type f ! -path "./a/*" ! -path "./b/*"


You were pretty close; the -name option only considers the basename, whereas -path considers the entire path. =)
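To see the basename-versus-path difference concretely, here is a small sketch (the demo layout and file names are made up for illustration):

```shell
# Hypothetical demo layout: one matching file inside ./tmp, one outside.
mkdir -p demo/tmp demo/data
touch demo/tmp/x_peaks.bed demo/data/y_peaks.bed
cd demo

# -name tests only the basename "x_peaks.bed", which contains no "tmp",
# so this exclusion never triggers and the tmp file is still listed:
find . -type f -name "*_peaks.bed" ! -name "*tmp*"

# -path tests the whole string "./tmp/x_peaks.bed", so this exclusion works:
find . -type f -name "*_peaks.bed" ! -path "./tmp/*"
```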

Nicely done. However, you forgot one thing the OP wanted: to find files ending in _peaks.bed. - alex Jan 3 at 2:44

@alex doh, good catch! Fixed =) - sampson-chen Jan 3 '13 at 2:45

This uses a number of extensions in GNU find, but since the question is tagged Linux, that's not a problem. Good answer. - Jonathan Leffler Jan 3 '13 at 2:47

A short note: if you use . at the start of the find path, you must use it in each excluded path as well. Path matching is very strict; no fuzzy matching is done. So if you use find / -type f -name "*.bed" ! -path "./tmp/*" it won't work; you need ! -path "/tmp/*" to make it happy. - peelman Nov 12 '13 at 20:08

It's important to note that the * matters: ! -path "./directory/*" - Thomas Bennett Aug 18 '14 at 15:55


Here is one way you could do it...

find . -type f -name "*_peaks.bed" | egrep -v "^(./tmp/|./scripts/)"

This has the advantage of working with any version of find, not just GNU find. However, the question is tagged Linux, so it doesn't really matter. - Jonathan Leffler Jan 3 '13 at 2:46


Try something like

find . \( -type f -name '*_peaks.bed' -print \) -or \( -type d -and \( -name tmp -or -name scripts \) -and -prune \)

and don't be too surprised if I got it a bit wrong. If the goal is an -exec (instead of -print), just substitute it in place.


For me, this solution didn't work with find's -exec; I don't really know why, so my solution is:

find . -type f -path "./a/*" -prune -o -path "./b/*" -prune -o -exec gzip -f -v {} \;

Explanation: same as sampson-chen one with the additions of

-prune - ignore everything under the matched path

-o - otherwise, i.e. if not pruned, apply the following action (prune the directories and process the remaining results)

18:12 $ mkdir a b c d e
18:13 $ touch a/1 b/2 c/3 d/4 e/5 e/a e/b
18:13 $ find . -type f -path "./a/*" -prune -o -path "./b/*" -prune -o -exec gzip -f -v {} \;

gzip: . is a directory -- ignored
gzip: ./a is a directory -- ignored
gzip: ./b is a directory -- ignored
gzip: ./c is a directory -- ignored
./c/3:    0.0% -- replaced with ./c/3.gz
gzip: ./d is a directory -- ignored
./d/4:    0.0% -- replaced with ./d/4.gz
gzip: ./e is a directory -- ignored
./e/5:    0.0% -- replaced with ./e/5.gz
./e/a:    0.0% -- replaced with ./e/a.gz
./e/b:    0.0% -- replaced with ./e/b.gz

The accepted answer didn't work for me, but this did. Using prune: find . -path ./scripts -prune -name '*_peaks.bed' -type f. Not sure how to exclude multiple directories, though. Even with type specified, it lists the top-level excluded directory. Unless you want to use prune to speed up the find, excluding via grep seems more straightforward. - Mohnish Sep 29 '17 at 19:23

I also had a hard time excluding multiple directories, but the comment above gave me a working answer. I used multiple instances of '-not -path', and in each path expression I included the full prefix used in find's first argument, ending each one with an asterisk (and escaping any dots). - jetset Apr 15 at 1:22


You can try below:

find ./ ! \( -path ./tmp -prune \) ! \( -path ./scripts -prune \) -type f -name '*_peaks.bed'

On such an old question (4 years!) you should explain why this new answer is better than or different from the existing ones, rather than just "dumping" code. - Nic3500 Dec 6 '17 at 4:13


Please explain the exec() function and its family

What is the exec() function and its family? Why is this function used, and how does it work?

Please, can anyone explain these functions?


Simplistically, in UNIX, you have the concept of processes and programs. A process is something in which a program executes.

The simple idea behind the UNIX "execution model" is that there are two operations you can do.

The first is to fork(), which creates a brand new process containing a duplicate of the current program, including its state. There are a few differences between the processes which allow them to figure out which is the parent and which is the child.

The second is to exec(), which replaces the program in the current process with a brand new program.

From those two simple operations, the entire UNIX execution model can be constructed.

To add some more detail to the above:

The use of fork() and exec() exemplifies the spirit of UNIX in that it provides a very simple way to start new processes.

The fork() call makes a near duplicate of the current process, identical in almost every way (not everything is copied over, for example resource limits in some implementations, but the idea is to create as close a copy as possible). One process calls fork() while two processes return from it; that sounds bizarre, but it's really quite elegant.

The new process (called the child) gets a different process ID (PID) and has the PID of the old process (the parent) as its parent PID (PPID).

Because the two processes are now running exactly the same code, they need to be able to tell which is which - the return code of fork() provides this information - the child gets 0, the parent gets the PID of the child (if the fork() fails, no child is created and the parent gets an error code). That way, the parent knows the PID of the child and can communicate with it, kill it, wait for it and so on (the child can always find its parent process with a call to getppid()).

The exec() call replaces the entire current contents of the process with a new program. It loads the program into the current process space and runs it from the entry point.

So, fork() and exec() are often used in sequence to get a new program running as a child of a current process. Shells typically do this whenever you try to run a program like find - the shell forks, then the child loads the find program into memory, setting up all command line arguments, standard I/O and so forth.
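You can watch this fork-then-exec pattern from the shell itself: parentheses fork a subshell, and the exec builtin performs the exec step, replacing that subshell with the named program (a minimal sketch):

```shell
# The parentheses fork a child shell; `exec` then replaces the child's
# process image with /bin/echo, so nothing after that line in the
# subshell would ever run. The parent shell is unaffected.
( exec echo "child: my process image is now echo" )
echo "parent: still the same shell process"
```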

But they're not required to be used together. It's perfectly acceptable for a program to call fork() without a following exec() if, for example, the program contains both parent and child code (you need to be careful what you do, each implementation may have restrictions). This was used quite a lot (and still is) for daemons which simply listen on a TCP port and fork a copy of themselves to process a specific request while the parent goes back to listening. For this situation, the program contains both the parent and the child code.

Similarly, programs that know they're finished and just want to run another program don't need to fork(), exec() and then wait()/waitpid() for the child. They can just load the child directly into their current process space with exec().

Some UNIX implementations have an optimized fork() which uses what they call copy-on-write. This is a trick to delay the copying of the process space in fork() until the program attempts to change something in that space. This is useful for those programs using only fork() and not exec() in that they don't have to copy an entire process space. Under Linux, fork() only makes a copy of the page tables and a new task structure, exec() will do the grunt work of "separating" the memory of the two processes.

If exec is called following fork (and this is what happens most of the time), that causes a write to the process space, and it is then copied for the child process.

Linux also has a vfork(), even more optimised, which shares just about everything between the two processes. Because of that, there are certain restrictions in what the child can do, and the parent halts until the child calls exec() or _exit().

The parent has to be stopped (and the child is not permitted to return from the current function) since the two processes even share the same stack. This is slightly more efficient for the classic use case of fork() followed immediately by exec().

Note that there is a whole family of exec calls (execl, execle, execve and so on) but exec in context here means any of them.

The following diagram illustrates the typical fork/exec operation where the bash shell is used to list a directory with the ls command:

+--------+
| pid=7  |
| ppid=4 |
| bash   |
+--------+
    | calls fork
+--------+             +--------+
| pid=7  |    forks    | pid=22 |
| ppid=4 | ----------> | ppid=7 |
| bash   |             | bash   |
+--------+             +--------+
    |                      |
    | waits for pid 22     | calls exec to run ls
    |                      V
    |                  +--------+
    |                  | pid=22 |
    |                  | ppid=7 |
    |                  | ls     |
    V                  +--------+
+--------+                 |
| pid=7  |                 | exits
| ppid=4 | <---------------+
| bash   |
    | continues

Functions in the exec() family have different behaviours:

  • l : arguments are passed as a list of strings to the main()
  • v : arguments are passed as an array of strings to the main()
  • p : path/s to search for the new running program
  • e : the environment can be specified by the caller

You can mix them, therefore you have:

  • int execl(const char *path, const char *arg, ...);
  • int execlp(const char *file, const char *arg, ...);
  • int execle(const char *path, const char *arg, ..., char * const envp[]);
  • int execv(const char *path, char *const argv[]);
  • int execvp(const char *file, char *const argv[]);
  • int execvpe(const char *file, char *const argv[], char *const envp[]);

For all of them the initial argument is the name of a file that is to be executed.

For more information read exec(3) man page:

man 3 exec  # if you are running a UNIX system

The exec family of functions make your process execute a different program, replacing the old program it was running. I.e., if you call

execl("/bin/ls", "ls", NULL);

then the ls program is executed with the process id, current working dir and user/group (access rights) of the process that called execl. Afterwards, the original program is not running anymore.

To start a new process, the fork system call is used. To execute a program without replacing the original, you need to fork, then exec.


exec is often used in conjunction with fork, which I saw that you also asked about, so I will discuss this with that in mind.

exec turns the current process into another program. If you ever watched Doctor Who, then this is like when he regenerates -- his old body is replaced with a new body.

The way this happens with exec is that the OS kernel checks whether the file you pass to exec as the program argument (the first argument) is executable by the current user (the user ID of the process making the exec call). If so, it replaces the virtual memory mapping of the current process with a new virtual memory map for the new program, and copies the argv and envp data that were passed in the exec call into an area of this new map. Several other things may also happen here, but the files that were open in the program that called exec remain open in the new program, and the two share the same process ID; the program that called exec, however, ceases to run (unless exec failed).

The reason it is done this way is that by separating the launch of a new program into two steps, you can do some work between them. The most common thing to do is to make sure that the new program has certain files opened as certain file descriptors. (Remember that file descriptors are not the same as FILE *; they are int values that the kernel knows about.) Doing this, you can:

int X = open("./output_file.txt", O_WRONLY | O_CREAT | O_TRUNC, 0666);

pid_t fk = fork();
if (!fk) { /* in child */
    dup2(X, 1); /* fd 1 is standard output,
                   so this makes standard out refer to the same file as X  */

    /* I'm using execl here rather than exec because
       it's easier to type the arguments. */
    execl("/bin/echo", "/bin/echo", "hello world", (char *)NULL);
    _exit(127); /* should not get here */
} else if (fk == -1) {
    /* An error happened and you should do something about it. */
    perror("fork"); /* print an error message */
}
close(X); /* The parent doesn't need this anymore */

This accomplishes running:

/bin/echo "hello world" > ./output_file.txt

from the command shell.


what is the exec function and its family.

The exec function family is all the functions used to execute a file, such as execl, execlp, execle, execv, and execvp. They are all front-ends for execve and provide different methods of calling it.

why is this function used

Exec functions are used when you want to execute (launch) a file (program).

and how does it work.

They work by overwriting the current process image with that of the program you launched. They replace (by ending) the currently running process (the one that called exec) with the newly launched one.

For more details: see this link.


The exec(3,3p) functions replace the current process with another. That is, the current process stops, and another runs instead, taking over some of the resources the original program had.

@JeremyP "the same file descriptors" is important here; it explains how redirection works in the shell! I was confused about how redirection could work if exec overwrote everything! Thanks - FUD Jan 6 '17 at 11:31




Here is the code in my .c file: (image) Why do I have to add an identifier before struct? I can't see why; how can I fix this?

Here is the error: (image)


Please post your code. –


If you're going to post here, you should at least post the code and give an exact description of the problem and the error in your current code. –


The actual problem is inside 'list.h', or above the snippet you showed. We need to see the _complete program_, as text; otherwise we cannot help you. Please read and follow the instructions at https://stackoverflow.com/help/mcve. – zwol



[file]:[line]:[column]: expected [punctuation] before [keyword] 


struct THING { } // oops! forgot a semicolon on this line 
struct OTHER { }; // compiler complains here, but the problem is up there 



#include "list.h" 
#define true 1 
#define false 0 
struct NODE *head; 

By the time the compiler parses struct NODE ..., the #define directives are no longer there, and #include "list.h" has been replaced by the contents of the file list.h. So the cause is probably a missing semicolon, or whatever else is inside list.h.



Thanks for your help! I see what you mean! –



Whatever is at the end of that file is most likely causing the error - for example, a missing semicolon at the end of a structure definition.


How can I send an email through the UNIX mailx command?

How can I send an email through the UNIX mailx command?


An example:

$ echo "something" | mailx -s "subject" recipient@somewhere.com

To send an attachment:

$ uuencode file file | mailx -s "subject" recipient@somewhere.com

And to send an attachment AND write the message body:

$ (echo "something
" ; uuencode file file) | mailx -s "subject" recipient@somewhere.com

I tried it but there was no response. It neither gave an error message nor delivered the mail to myname@gmail.com. Is any server configuration required? - user269484 Feb 18 '10 at 4:58

No configuration is needed. Check your internet connection. I'm connected to the internet directly via cable and don't use a proxy or anything, so it works on my side. - ghostdog74 Feb 18 '10 at 5:58

You should also check your local inbox for error messages, i.e. run mail. - hafichuk Dec 31 '12 at 5:52

Note, though, that uuencode is legacy technology from the last millennium, and it does not produce what we would today call an "attachment". It basically dumps a machine-readable mess at the end of the message text. In this day and age you would be better served by a proper MIME-aware mail client. Unfortunately there is no universally supported MIME-capable replacement for mailx, but if you have mutt, that is probably the path of least resistance. - Oct 1 '14 at 3:18

@user269484 Gmail doesn't accept email from unauthorised IP addresses. Read support.google.com/mail/answer/10336 – Manas Jayanth Jan 11 '16 at 18:07


Here you are :

echo "Body" | mailx -r "FROM_EMAIL" -s "SUBJECT" "To_EMAIL"

PS. Body and subject should be kept within double quotes. Remove quotes from FROM_EMAIL and To_EMAIL while substituting email addresses.

On Mac you will receive an error from the mailx command if you use -r mailx: illegal option -- r Usage: mailx [-EiInv] [-s subject] [-c cc-addr] [-b bcc-addr] [-F] to-addr ... mailx [-EHiInNv] [-F] -f [name] mailx [-EHiInNv] [-F] [-u user] mailx -e [-f name] mailx -H – jcpennypincher Apr 15 '16 at 19:36

you could do -S from=a@b.com – Kalpesh Soni Jun 8 '17 at 18:54

mailx -s "subjec_of_mail" abc@domail.com < file_name

Through the mailx utility we can send a file from unix to a mail server. In the code above, the first parameter is -s "subject of mail", the second parameter is the mail ID, and the last parameter is the name of the file we want to attach.

This doesn't attach the file, it puts the content of the file into the body – Guus Dec 17 '18 at 16:58


It's faster with the mutt command:

echo "Body Of the Email"  | mutt -a "File_Attachment.csv" -s "Daily Report for $(date)"  -c cc_mail@g.com to_mail@g.com -y
  1. -c email cc list
  2. -s subject list
  3. -y to send the mail

From the man page:

Sending mail

To send a message to one or more people, mailx can be invoked with arguments which are the names of people to whom the mail will be sent. The user is then expected to type in his message, followed by an ‘control-D’ at the beginning of a line.

In other words, mailx reads the content to send from standard input and can be redirected to like normal. E.g.:

ls -l $HOME | mailx -s "The content of my home directory" someone@email.adr
mail [-s subject] [-c ccaddress] [-b bccaddress] toaddress

-c and -b are optional.

-s : Specify the subject; if the subject contains spaces, use quotes.

-c : Send carbon copies to a list of users separated by commas.

-b : Send blind carbon copies to a list of users separated by commas.

Hope my answer clarifies your doubt.

this accepts text, how can you end the mail body? – knocte Mar 11 '16 at 8:08

echo "Sending emails ..."
NOW=$(date +"%F %H:%M")
echo $NOW " Running service" >> open_files.log
header=`echo "Service Restarting: " $NOW`

mail -s "$header" abc.xyz@google.com,cde.mno@yahoo.com < open_files.log

Customizing FROM address


echo $MESSAGE | mail  -s "$SUBJECT" $TOADDR  -- -f $FROM

An excerpt from man mail: -f [file] Read in the contents of the user's mbox (or the specified file) for processing; when mailx is quit, it writes undeleted messages back to this file. The string file is handled as described for the folder command below. – ZJ Lyu May 13 '17 at 3:48


If you want to send more than two person or DL :

echo "Message Body" | mailx -s "Message Title" -r sender@someone.com receiver1@someone.com,receiver_dl@.com


  • -s = subject or mail title
  • -r = sender mail or DL

Here is a multifunctional function to tackle mail sending with several attachments:

enviaremail() {
values=$(echo "$@" | tr -d '\n')
listargs=($values)
heirloom-mailx $( attachment=""
for (( a = 5; a < ${#listargs[@]}; a++ )); do
attachment=$(echo "-a ${listargs[a]} ")
echo "${attachment}"
done) -v -s "${titulo}" \
-S smtp-use-starttls \
-S ssl-verify=ignore \
-S smtp-auth=login \
-S smtp=smtp://$1 \
-S from="${2}" \
-S smtp-auth-user=$3 \
-S smtp-auth-password=$4 \
-S ssl-verify=ignore \
$5 < ${cuerpo}
}

function call: enviaremail "smtp.mailserver:port" "from_address" "authuser" "'pass'" "destination" "list of attachments separated by space"

Note: Remove the double quotes in the call

In addition, please remember to define $titulo (subject) and $cuerpo (body) of the email externally, prior to using the function.






WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_1.TXT (Header file) 
WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_2.TXT (data file) 
WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_3.TXT (trailer file) 

WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_1.TXT (Header file) 
WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_2.TXT (data file) 
WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_3.TXT (trailer file) 






So you mean you want to create a shell script? How far have you gotten? Include your script in your question. – mattias


This would be a useful application of 'cat' –


I tried this: ls | awk -F'_' '!x[$1]++ {print $1}' | while read -r line; do cat $line >> $line.txt; done. It creates the temp files with the correct data, but I need to rename the files as described above and delete the existing files, keeping only the final files. – VTIN




$ for f in WP*_?_?.TXT; do
    echo "+++++++ $f"
    cat $f
    echo ""
  done

+++++++ WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_1.TXT 
2024916 header 

+++++++ WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_2.TXT 
2024916 data 

+++++++ WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3_3.TXT 
2024916 trailer 

+++++++ WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_1.TXT 
2024078 header 

+++++++ WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_2.TXT 
2024078 data 

+++++++ WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7_3.TXT 
2024078 trailer 


$ ls WP*_?_?.TXT | cut -d"_" -f1-8 | sort -u | while read -r fprefix; do
    # concatenate source files
    cat ${fprefix}_[123].TXT > ${fprefix}.TXT

    # display concatenated files
    echo "+++++++ ${fprefix}.TXT"
    cat ${fprefix}.TXT
    echo ""
  done

+++++++ WP2024078_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_3.TXT 
2024916 header 
2024916 data 
2024916 trailer 

+++++++ WP2024916_191_FACETS_DAILY_CLAIMS_EXTRACT_20171110094055_7.TXT 
2024078 header 
2024078 data 
2024078 trailer 

Thank you so much.. it worked, with only one small change: I removed WP from the ls command, since my file names can also start with other letters. One more thing I want to add here: I want to delete the 3 source files and keep only the final file. Can I delete them..? – VTIN


If '${fprefix}' is correct, you can remove the 3x source files with something like: 'rm ${fprefix}_?_?.TXT' – markp


Thanks again for your great help... you are a genius!! You've made my whole day!!!!!! Here is the final code I'm using: ls *_?_?.TXT | cut -d"_" -f1-8 | sort -u | while read -r fprefix; do cat ${fprefix}_[123].TXT > ${fprefix}.TXT; echo "+++++++ ${fprefix}.TXT"; cat ${fprefix}.TXT; rm ${fprefix}_?_?.TXT; echo ""; done – VTIN

ls *_?_?.TXT | while read -r filename; do
    # concatenate source files
    cat $filename >> ${filename%_*}.TXT
    rm $filename
done




Linux: copy and create destination dir if it does not exist

I want a command (or probably an option to cp) that creates the destination directory if it does not exist.


cp -? file /path/to/copy/file/to/is/very/deep/there

To summarize, and to give a complete working solution in one line: be careful if you want to rename your file, since you need to provide a clean directory path to mkdir. $fdst can be a file or a dir. The following code should work in either case.

mkdir -p $(dirname ${fdst}) && cp -p ${fsrc} ${fdst}

or bash specific

mkdir -p ${fdst%/*} && cp -p ${fsrc} ${fdst}
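If you do this often, the same idea can be wrapped in a tiny helper (a sketch; cpmkdir is a made-up name, not a standard command):

```shell
# Hypothetical helper: create the destination's parent directory, then copy.
cpmkdir() {
    mkdir -p "$(dirname "$2")" && cp -p "$1" "$2"
}

# usage: the deep destination path need not exist beforehand
echo demo > file.txt
cpmkdir file.txt some/very/deep/dir/file.txt
```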

Just had the same issue. My approach was to just tar the files into an archive like so:

tar cf your_archive.tar file1 /path/to/file2 path/to/even/deeper/file3

tar automatically stores the files in the appropriate structure within the archive. If you run

tar xf your_archive.tar

the files are extracted into the desired directory structure.
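For instance, the round trip looks like this (a sketch with throwaway file names):

```shell
# Pack a file together with its relative path, then unpack elsewhere;
# tar recreates the directory structure on extraction.
mkdir -p src/path/to && echo hi > src/path/to/file3
tar cf archive.tar -C src path/to/file3
mkdir -p extracted && tar xf archive.tar -C extracted
ls extracted/path/to/file3
```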

rsync file /path/to/copy/file/to/is/very/deep/there

This might work, if you have the right kind of rsync.

This didn't work for me - I get "No such file or directory" for the destination dir - Rich Nov 12 '15 at 17:19


Copy from source to a non-existing path

mkdir -p /destination && cp -r /source/ $_

NOTE: this command copies all the files

cp -r is for copying all folders and their content

$_ works as the destination, which was created by the last command



cp -a * /path/to/dst/

should do the trick.


Let's say you are doing something like

cp file1.txt A/B/C/D/file.txt

where A/B/C/D are directories which do not exist yet

A possible solution is as follows

DIR=$(dirname A/B/C/D/file.txt)
# DIR= "A/B/C/D"
mkdir -p $DIR
cp file1.txt A/B/C/D/file.txt

hope that helps!


How to convert DOS/Windows newline (CRLF) to Unix newline (LF) in a Bash script?

How can I programmatically (i.e., not using vi) convert DOS/Windows newlines to Unix?

The dos2unix and unix2dos commands are not available on certain systems. How can I emulate these with commands like sed/awk/tr?


This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo. It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...

Is there a way to do this in a streaming fashion, without modifying the original file? - augurar Dec 7 '13 at 22:08

@augurar you can check "similar packages": package.debian.org/wheezy/flip - n611x007 Aug 19 '14 at 11:12

I just broke half my OS by running texxto with the wrong flags. Be extra careful if you're doing this on entire folders. - A_P Sep 13 '18 at 13:21


The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator, while Unix uses LF as a line terminator. The difference is that a DOS file (usually) won't have anything after the last line in the file, while a Unix file will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not to molest files that are already in unix format) is a bit of perl:

perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt

Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add perl's -i flag.

RIP my data files. xD Something went wrong. - Ludovic Zenohate Lagouardette Jan 21 '16 at 10:53

@LudovicZenohateLagouardette Was it a plain text file (i.e. csv or tab-delimited text), or something else? If it was in some database format, manipulating it as if it were text could very well corrupt its internal structure. - Gordon Davisson Jan 23 '16 at 20:53

A plain text csv, but I think something was strange about it. I think it got messed up because of that. No worries, though. I always keep backups, and it wasn't even the real dataset, only 1gb. The real one is 26gb. - Ludovic Zenohate Lagouardette Jan 24 '16 at 8:02


If you don't have access to dos2unix, but can read this page, then you can copy/paste dos2unix.py from here.

#!/usr/bin/env python
"""
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
    sys.exit(__doc__)

content = ''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
    content = infile.read()
with open(sys.argv[2], 'wb') as output:
    for line in content.splitlines():
        outsize += len(line) + 1
        output.write(line + b'\n')

print("Done. Saved %s bytes." % (len(content)-outsize))

Cross-posted from superuser.

The usage is misleading. The real dos2unix converts all input files by default. Your usage implies the -n parameter. And the real dos2unix is a filter that reads from stdin and writes to stdout if no files are given. - jfs Jul 6 '15 at 11:32


You can use vim programmatically with the option -c {command} :

Dos to Unix:

vim file.txt -c "set ff=unix" -c ":wq"

Unix to dos:

vim file.txt -c "set ff=dos" -c ":wq"

"set ff=unix/dos" means change fileformat (ff) of the file to Unix/DOS end of line format

":wq" means write file to disk and quit the editor (allowing to use the command in a loop)

This seems like the most elegant solution, but the lack of an explanation of what wq means is unfortunate. - Jorrick Sleijster Feb 23 at 12:23

Anyone who has used vi will know what :wq means. For those who haven't, the 3 characters mean 1) open the vi command area, 2) write and 3) quit. - David Newcomb Feb 27 at 10:24

I didn't know you could pass commands to vim non-interactively from the CLI - Robert Dundon Apr 4 at 13:24


Super duper easy with PCRE;

As a script, or replace $@ with your files.

#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- "$@"

This will overwrite your files in place!

I recommend only doing this with a backup (version control or otherwise)

Thanks! This works, although I wrote the filenames without --. I chose this solution because it's easy to understand and adapt. FYI, this is what the switches do: -p assumes a "while input" loop, -i edits the input file in place, -e executes the following command - Rolf Nov 11 '10 at 12:21

Strictly speaking, PCRE is a reimplementation of Perl's regex engine, not Perl's regex engine itself. They both have this capability, although there are also differences, despite the name. - Oct 27 '17 at 8:24


To convert a file in place do

dos2unix <filename>

To output converted text to a different file do

dos2unix -n <input-file> <output-file>

It's already installed on Ubuntu and is available on homebrew with brew install dos2unix

I know the question explicitly asks for alternatives to this utility but this is the first google search result for "convert dos to unix line endings".


An even simpler awk solution w/o a program:

awk -v ORS='\r\n' '1' unix.txt > dos.txt

Technically '1' is your program here, because awk requires one when given options.

UPDATE: After revisiting this page for the first time in a long time I realized that no one has yet posted an internal solution, so here is one:

while IFS= read -r line; do
    printf '%s\n' "${line%$'\r'}"
done < dos.txt > unix.txt

This is handy, but just to be clear: this converts Unix -> Windows/DOS, which is the opposite direction of what the OP asked for. - mklement0 Feb 28 '15 at 6:01

It was intentional, left as an exercise for the author. eyerolls awk -v RS='\r\n' '1' dos.txt > unix.txt - nawK Mar 1 '15 at 4:14

Great (and kudos for your pedagogical skills). - mklement0 Mar 1 '15 at 4:35

"b/c awk requires one when given option." - awk always requires a program, whether or not options are specified. - mklement0 Mar 1 '15 at 4:37

A pure bash solution is interesting, but much slower than an equivalent awk or sed solution. Also, you must use IFS= read -r line to faithfully preserve the input lines; otherwise, leading and trailing whitespace is trimmed (alternatively, use no variable name in the read command and use $REPLY). - mklement0 Mar 1 '15 at 6:14


Interestingly, in my git-bash on Windows, sed "" already did the trick:

$ echo -e "abc\r\n" > tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text

My guess is that sed ignores them when reading lines from input and always writes unix line endings on output.


This worked for me

tr "\r" "\n" < sampledata.csv > sampledata2.csv

This converts each DOS newline into two UNIX newlines. - Melebius Aug 4 '15 at 6:11
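A related tr variant simply deletes the carriage returns instead of translating them, which avoids producing doubled newlines (sample file names are made up):

```shell
# Build a small CRLF file, then strip every CR byte;
# the remaining LFs become plain Unix line endings.
printf 'one\r\ntwo\r\n' > dos_sample.csv
tr -d '\r' < dos_sample.csv > unix_sample.csv
```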


I just had to ponder that same question (on the Windows side, but equally applicable to Linux). Surprisingly, nobody has mentioned a very automated way of doing CRLF<->LF conversion for text files using the good old zip -ll option (Info-ZIP):

zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip 

NOTE: this would create a zip file preserving the original file names but converting the line endings to LF. Then unzip would extract the files as zip'ed, that is with their original names (but with LF-endings), thus prompting to overwrite the local original files if any.

Relevant excerpt from the zip --help:

zip --help
-l   convert LF to CR LF (-ll CR LF to LF)


perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt

Based on @GordonDavisson

One must consider the possibility of [noeol] ...


For Mac OS X, if you have homebrew installed [http://brew.sh/][1]:

brew install dos2unix

for csv in *.csv; do dos2unix -c mac ${csv}; done;

Make sure you have made copies of the files, as this command will modify the files in place. The -c mac option makes the switch to be compatible with osx.

dos2unix turned out to be very handy! - HelloGoodbye Aug 21 '14 at 15:12

This answer doesn't really address the original poster's question. - hlin117 Feb 7 '15 at 17:43

OS X users should not use -c mac, which is only for converting OS-X-CR-only newlines. You only want that mode for files from Mac OS 9 or before. - askewchan Apr 14 '16 at 13:20


You can use awk. Set the record separator (RS) to a regexp that matches all possible newline character, or characters. And set the output record separator (ORS) to the unix-style newline character.

awk 'BEGIN{RS="\r\n|\r|\n"; ORS="\n"}{print}' windows_or_macos.txt > unix.txt

This is the one that worked for me (MacOS, git diff showing ^M, edited in vim) - Dorian Mar 1 '17 at 9:17


On Linux it's easy to convert ^M (ctrl-M) to *nix newlines (^J) with sed.

On the CLI it will look something like this; there will actually be a line break in the text. However, the \ passes that ^J along to sed:

sed 's/^M/\
/g' < ffmpeg.log > new.log

You get this by typing ^V (ctrl-V), ^M (ctrl-M) and \ (backslash):

sed 's/^V^M/^V^J/g' < ffmpeg.log > new.log

As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:

sed '/^M$/! s/$/^M/'

This checks that the line does not already end in CRLF before converting to CRLF.

sed --expression='s/\r\n/\n/g' <input.txt >output.txt

Since the question mentions sed, this is the most straightforward way to use sed to achieve this. What the expression says is: replace every carriage-return-plus-line-feed with a line feed only. That is what you need when you go from Windows to Unix. I verified that it works.

Hey John Paul - this answer was flagged for deletion, so it showed up in my review queue. Generally, when you have an 8-year-old question with 22 answers, you'll want to explain how your answer is useful in ways the other existing answers are not. - zzxyz Aug 18 '18 at 22:34


I made a script based on the accepted answer so you can convert files directly, without needing an additional file at the end that you then have to remove and rename.

convert-crlf-to-lf() {
    tr -d '\015' <"$file" >"$file"2
    rm -rf "$file"
    mv "$file"2 "$file"
}
Just make sure, if you have a file like "file1.txt", that "file1.txt2" doesn't already exist, or it will be overwritten; I use it as a temporary place to store the file.


I tried sed 's/^M$//' file.txt on OSX as well as several other methods (http://www.thingy-ma-jig.co.uk/blog/25-11-2010/fixing-dos-line-endings or http://hintsforums.macworld.com/archive/index.php/t-125.html). None worked; the file remained unchanged (btw, Ctrl-V Enter was needed to reproduce ^M). In the end I used TextWrangler. It's not strictly command line, but it works and it doesn't complain.
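When the sed variants misbehave like this, a short Python helper (an alternative not mentioned in the thread; the function name is mine) converts all three newline conventions in one pass:

```python
def normalize_newlines(data: bytes) -> bytes:
    # Convert CRLF (DOS) and lone CR (classic Mac) line endings to LF (Unix).
    # Order matters: CRLF must be collapsed before lone CRs are replaced.
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

# Typical use, rewriting a file in place (keep a backup first;
# "file.txt" is a placeholder name):
# data = open("file.txt", "rb").read()
# open("file.txt", "wb").write(normalize_newlines(data))
```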


There are plenty of awk/sed/etc answers so as a supplement (since this is one of the top search results for this issue):

You may not have dos2unix but do you have iconv?

iconv -f UTF-16LE -t UTF-8 [filename.txt]
-f from format type
-t to format type

Or all files in a directory:

find . -name "*.sql" -exec iconv -f UTF-16LE -t UTF-8 {} -o ./{} \;

This runs the same command, on all .sql files in the current folder. -o is the output directory so you can have it replace the current files, or, for safety/backup reasons, output to a separate directory.
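The same batch idea can be sketched with Python's standard library (a hypothetical helper, not from the thread; the encodings match the iconv call above):

```python
import pathlib

def reencode_sql_files(root):
    # Re-encode every .sql file under root from UTF-16LE to UTF-8 in place,
    # mirroring the find/iconv loop above.
    for path in pathlib.Path(root).rglob("*.sql"):
        text = path.read_text(encoding="utf-16-le")
        path.write_text(text, encoding="utf-8")
```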

This attempts an encoding conversion from UTF-16LE to UTF-8, but it doesn't touch line endings at all. It is unrelated to the question being asked. - Palec Oct 13 '10 at 13:36

My mistake. I'll verify that, but I literally used this THAT DAY to fix my files that wouldn't run through grep because they were Windows-formatted. - Katastic Voyage Oct 14 '17 at 17:34

That is also a common problem, but not the one the OP asked about (and far less common than the CRLF problem). - Oct 27 '17 at 8:22





  • Process state
  • Pointers
  • Process size
  • User ID
  • Process ID
  • Event description
  • Priority

You can run: 'man ps' – alfasin


What exactly do you even mean by "pointers" and "event description"? Is this a list you copied from somewhere, or are you just guessing what a process table should contain? – duskwuff


The "process table" as such lives in kernel memory. Some systems (such as AIX, Solaris and Linux - not "Unix") have a /proc filesystem that makes those tables visible to ordinary programs. Without that, programs such as ps (on very old systems such as SunOS 4) needed elevated privileges to read the /dev/kmem (kernel memory) special device, along with detailed knowledge of the kernel memory layout.


But the command to list processes is 'ps'; on Linux you can also use 'top' or 'htop', both of which access '/proc/' –


I mentioned ps (the most common example). top would be similar. A quick check on HPUX, which has no /proc, shows that ps and top are not setuid (subject to further checking). So I changed that to SunOS 4 :-) –




ps stands for 'process status', which answers your first bullet. But the command takes more than 30 options, and depending on the information you seek and the permissions your system administrator has granted you, you can get various kinds of information out of it. For example, for the second bullet in the list above, depending on what you are looking for you can get information on 3 different kinds of pointers - the session pointer (with option 'sess'), the terminal session pointer (tsess), and the process pointer (uprocp).


Some UNIX variants implement a filesystem view of the internal system process table to support the operation of programs such as ps. This is usually mounted on /proc (see @ThomasDickey's response above).
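On Linux, where /proc is available, the process table can be peeked at directly with a few lines of Python (a sketch, not something from the thread):

```python
import os

def list_pids():
    # Each numeric directory entry under /proc is one process in the
    # process table. Return an empty list where /proc does not exist
    # (e.g. macOS, HP-UX).
    if not os.path.isdir("/proc"):
        return []
    return sorted(int(name) for name in os.listdir("/proc") if name.isdigit())

print(list_pids()[:5])
```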




Issuing bash commands to remote hosts - errors writing to local output files


I am trying to run several sets of commands in parallel on a few remote hosts. I have created a script that builds these commands and then writes the output to local files, along the lines of:

ssh <me>@<ip1> "command" 2> ./path/to/file/newFile1.txt & ssh <me>@<ip2> 
"command" 2> ./path/to/file/newFile2.txt & ssh <me>@<ip2> "command" 2> 
./path/to/file/newFile3.txt; ...(same repeats itself, with new commands and new 
file names)... 


bash: ./path/to/file/newFile1.txt: No such file or directory 
bash: ./path/to/file/newFile2.txt: No such file or directory 
bash: ./path/to/file/newFile3.txt: No such file or directory 




Edit - details:


- home
  - User
    - Desktop
      - Servers
      - Outputs
        - ...

I run the bash script from home/User/Desktop/Servers. The script creates the commands that need to be run on the remote servers. First, the script creates the directories where the files will be stored.

mkdir -p ${outputFolder}/f{fileNumb} 

The script then goes on to create the commands that will be invoked on the remote hosts, whose respective outputs will be placed in the created directories. The directories are there. Running the commands gives me the errors, yet printing the commands and then copy-pasting them from the same location works, for some reason. I have also tried giving the full path of the directory; still the same problem.



The path must exist for the redirection to work (intermediate directories are not created automatically). So 'mkdir -p path/to/file' before the redirection '> path/to/file/newFile.txt'. –
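The same rule holds in any language; for example, in Python (a sketch with an illustrative helper name) you create the intermediate directories before opening the output file:

```python
import os

def write_output(path, text):
    # Create intermediate directories first; open() alone will not,
    # just as shell redirection will not.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(text)
```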


Your question doesn't seem to contain enough relevant information to allow anything beyond speculation. Please present a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve) of the problem, so that all the relevant information (and a minimum of irrelevant information) is here. –


Your question *still* doesn't contain enough information to really tell what is going on - that is the point of creating an MCVE. But your mention of "creating commands" makes me suspicious - are you trying to store the commands in variables before executing them? If so, there are many things that can go wrong. See [BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!](http://mywiki.wooledge.org/BashFAQ/050). –



bash: ./path/to/file/newFile1.txt: No such file or directory 

Then you'll notice there is an extra space between the colon and the dot, so it is actually trying to open a file named " ./path/to/file/newFile1.txt" (without the quotes).


something ... 2> " ./path/to/file/newFile1.txt" 



Good idea, but I checked again and there is no space. – dtam







When I copy and paste it, I do so in the same location as the running script. I've edited the original post with some more information, if that helps. – dtam





eval $cmd 



What is the simplest way to SSH using Python?

How can I simply SSH to a remote server from a local Python (3.0) script, supply a login/password, execute a command and print the output to the Python console?

I would rather not use any large external library or install anything on the remote server.

up vote 39 down vote accepted favorite

I haven't tried it, but this pysftp module might help, which in turn uses paramiko. I believe everything is client-side.

The interesting command is probably .execute(), which executes an arbitrary command on the remote machine. (The module also features .get() and .put() methods, which allude more to its FTP character.)


I've re-written the answer after the blog post I originally linked to is not available anymore. Some of the comments that refer to the old version of this answer will now look weird.

Nice find! This extra abstraction is great as long as you don't care about custom error responses. - Cascabel Aug 5 '09 at 14:52

The ssh module does the trick. Simple, works fine. No searching through the Paramiko API. - Christopher Tokar Aug 6 '09 at 15:20

The link to the ssh.py file inside the link you gave is broken :/ - dgorissen Jun 29 '11 at 13:15

Yes, please give us a new link. I found ssh.py on github but it's not the same (and not as good) - jdborg Sep 19 '11 at 16:18

The pysftp package only provides SFTP. That is far from an SSH client. - bortzmeyer Oct 13 '11 at 13:27


You can code it yourself using Paramiko, as suggested above. Alternatively, you can look into Fabric, a python application for doing all the things you asked about:

Fabric is a Python library and command-line tool designed to streamline deploying applications or performing system administration tasks via the SSH protocol. It provides tools for running arbitrary shell commands (either as a normal login user, or via sudo), uploading and downloading files, and so forth.

I think this fits your needs. It is also not a large library and requires no server installation, although it does have dependencies on paramiko and pycrypto that require installation on the client.

The app used to be here. It can now be found here.

* The official, canonical repository is git.fabfile.org
* The official Github mirror is GitHub/bitprophet/fabric

There are several good articles on it, though you should be careful because it has changed in the last six months:

Deploying Django with Fabric

Tools of the Modern Python Hacker: Virtualenv, Fabric and Pip

Simple & Easy Deployment with Fabric and Virtualenv

Later: Fabric no longer requires paramiko to install:

$ pip install fabric
Downloading/unpacking fabric
  Downloading Fabric-1.4.2.tar.gz (182Kb): 182Kb downloaded
  Running setup.py egg_info for package fabric
    warning: no previously-included files matching '*' found under directory 'docs/_build'
    warning: no files found matching 'fabfile.py'
Downloading/unpacking ssh>=1.7.14 (from fabric)
  Downloading ssh-1.7.14.tar.gz (794Kb): 794Kb downloaded
  Running setup.py egg_info for package ssh
Downloading/unpacking pycrypto>=2.1,!=2.4 (from ssh>=1.7.14->fabric)
  Downloading pycrypto-2.6.tar.gz (443Kb): 443Kb downloaded
  Running setup.py egg_info for package pycrypto
Installing collected packages: fabric, ssh, pycrypto
  Running setup.py install for fabric
    warning: no previously-included files matching '*' found under directory 'docs/_build'
    warning: no files found matching 'fabfile.py'
    Installing fab script to /home/hbrown/.virtualenvs/fabric-test/bin
  Running setup.py install for ssh
  Running setup.py install for pycrypto
Successfully installed fabric ssh pycrypto
Cleaning up...

This is mostly cosmetic, however: ssh is a fork of paramiko, the maintainer for both libraries is the same (Jeff Forcier, also the author of Fabric), and the maintainer has plans to reunite paramiko and ssh under the name paramiko. (This correction via pbanka.)

Since this seems like an interesting link, I wanted to update it because it's now broken. Please use: clemesha.org/blog/... - dlewin Feb 15 '12 at 15:00

Thanks. Fixed the 404 link. - hughdbrown Feb 15 '12 at 16:22

Didn't the asker state that he didn't want to use a "large external library"? Paramiko and Fabric are both overkill when all the author really asked for was a simple one-off ssh recipe. - Zoran Pavlovic Aug 15 '12 at 14:18

@Zoran Pavlovic: All the answers either install a local package (paramiko, fabric, ssh, libssh2) or use subprocess to run ssh. The latter is an install-nothing solution, but I don't think spawning ssh is a great idea, and neither did the OP after he chose the answer that installs the ssh module. Its docs say: "ssh.py provides the three common SSH operations, get, put and execute. It is a high-level abstraction of Paramiko." So unless you enjoy coding against the rather heavy libssh2, there is no recommendation consistent with the constraints. When the OP's conditions cannot reasonably be met, I lean toward giving a good solution anyway. - hughdbrown Aug 30 '12 at 21:35


If you want to avoid any extra modules, you can use the subprocess module to run

ssh [host] [command]

and capture the output.

Try something like:

import subprocess

process = subprocess.Popen("ssh example.com ls", shell=True,
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, stderr = process.communicate()
status = process.poll()
print(output)

To deal with usernames and passwords, you can use subprocess to interact with the ssh process, or you could install a public key on the server to avoid the password prompt.
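The capture pattern itself can be tried without a remote host by substituting any local command for ssh ("echo" here is just a stand-in for "ssh host command"):

```python
import subprocess

# Same Popen/communicate pattern as above, with a local command
# standing in for the ssh invocation.
process = subprocess.Popen(["echo", "hello"],
                           stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = process.communicate()
status = process.poll()
print(status, output.decode().strip())  # prints: 0 hello
```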

But what if the client is on Windows? - Nathan Jul 8 '10 at 14:05

Feeding a password to the ssh subprocess through a pipe can be difficult. See "Why not just use a pipe (popen())?". You may need a pty and the pexpect module to work around it. - jfs Feb 19 '14 at 11:58

Doesn't seem to work with the string 'ssh somecomputer; python -c "import numpy; print numpy.__version__"' - it says it doesn't know the command "import" - usethedeathstar Apr 18 '14 at 7:50

@usethedeathstar: wrap the whole remote command in quotes: ssh somecomputer 'python -c "import this; print this"' - Neil Apr 22 '14 at 17:32


I have written Python bindings for libssh2. Libssh2 is a client-side library implementing the SSH2 protocol.

import socket
import libssh2

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('example.com', 22))

session = libssh2.Session()
session.userauth_password('john', '******')

channel = session.channel()
channel.execute('ls -l')

print channel.read(1024)

Looks quite low-level. For instance (your own example), you have to state explicitly whether you use IPv4 or IPv6 (which you don't have to with the OpenSSH command-line client). Also, I couldn't find out how to use the ssh-agent. - bortzmeyer Oct 13 '11 at 13:24

The nice thing about pylibssh2 is that it transfers files faster than any native Python implementation of ssh. - Damien Dec 3 at 18:13


Your definition of "simplest" is important here - simple code means using a module (though "large external library" is an exaggeration).

I believe the most up-to-date (actively developed) module is paramiko. It comes with demo scripts in the download, and has detailed online API documentation. You could also try PxSSH, which is contained in pexpect. There's a short sample along with the documentation at the first link.

Again with respect to simplicity, note that good error-detection is always going to make your code look more complex, but you should be able to reuse a lot of code from the sample scripts then forget about it.


Like hughdbrown, I like Fabric. Please note that while it implements its own declarative scripting (for making deploys and the like), it can also be imported as a Python module and used in your programs without having to write a Fabric script.

Fabric has a new maintainer and is in the process of being rewritten; that means most tutorials you'll (currently) find on the web will not work with the current version. Also, Google still shows the old Fabric page as the first result.

For up to date documentation you can check: http://docs.fabfile.org

Fabric uses a fork of paramiko, pypi.python.org/pypi/ssh, for all the ssh stuff. - Damien Dec 3 at 18:15


I found paramiko to be a bit too low-level, and Fabric not especially well-suited to being used as a library, so I put together my own library called spur that uses paramiko to implement a slightly nicer interface:

import spur

shell = spur.SshShell(hostname="localhost", username="bob", password="password1")
result = shell.run(["echo", "-n", "hello"])
print result.output # prints hello

You can also choose to print the output of the program as it's running, which is useful if you want to see the output of long-running commands before it exits:

result = shell.run(["echo", "-n", "hello"], stdout=sys.stdout)

Running non-standard commands is not supported; for example, on some routers (MikroTik) commands are prefixed with "/", and this library raises an error for them. For standard Linux hosts it seems pretty good. - Babken Vardanyan Jul 31 '14 at 3:33

When I pass an IP address as the hostname, it throws an error saying the IP was not found in known_hosts... - rexbelia Nov 28 '16 at 19:58

@rexbelia That is normal SSH behavior: to make sure you are talking to the right server, SSH only accepts a key from a host if it is already known. The solution is to add the relevant key to known_hosts, or to set the missing_host_key argument to an appropriate value, as described in the docs. - Michael Williamson Dec 1 '16 at 22:37


For the benefit of those who reach here googling for a python ssh sample: the original question and answers are almost a decade old now. It seems that paramiko has gained a bit of functionality (OK, I'll admit - pure guessing here - I'm new to Python) and you can create an ssh client directly with paramiko.

import base64
import paramiko

client = paramiko.SSHClient()

client.connect('', username='user', password='password')
stdin, stdout, stderr = client.exec_command('cat /proc/meminfo')
for line in stdout:
    print('... ' + line.strip('\n'))

This code was adapted from the demo at https://github.com/paramiko/paramiko. It works for me.

Thanks, works like a charm! - Valmond Aug 28 '18 at 8:44


This worked for me

import subprocess
import sys

def passwordless_ssh(HOST, COMMAND):
    ssh = subprocess.Popen(["ssh", "%s" % HOST, COMMAND],
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
    result = ssh.stdout.readlines()
    if result == []:
        error = ssh.stderr.readlines()
        print("ERROR: %s" % error, file=sys.stderr)
        return "error"
    else:
        return result

Please refer to paramiko.org; it is very useful when doing ssh from Python.

import paramiko

import time

ssh = paramiko.SSHClient() #SSHClient() is the paramiko object

'''The line below adds the server key automatically to the known_hosts file:'''

ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

Here we are actually connecting to the server.

ssh.connect('', port=22, username='admin', password='')


I have imported time because some servers or endpoints print their own information after login, e.g. version, model and uptime information, so it is better to allow some time before executing the command.

Here we execute the command; stdin is for input, stdout for output, stderr for errors:

stdin, stdout, stderr = ssh.exec_command('xstatus Time')

Here we are reading the lines from output.

output = stdout.readlines() 


Below are all the exceptions handled by paramiko during ssh. Refer to paramiko.org for more information about exceptions.

except (BadHostKeyException, AuthenticationException,
SSHException, socket.error) as e:


Have a look at spurplus, a wrapper around spur and paramiko that we developed to manage remote machines and perform file operations.

Spurplus provides a check_output() function out-of-the-box:

import spurplus
with spurplus.connect_with_retries(
        hostname='some-machine.example.com', username='devop') as shell:
     out = shell.check_output(['/path/to/the/command', '--some_argument'])