

Why isn't the result of my shell command saved into my variable?


I've done this before. Why isn't the result of my shell command being saved into my variable?

I've seen plenty of posts on this forum, and even more elsewhere, about how to save the result of a shell command into a variable. They all say to do this:

VAR="$(shell_command)" 
echo $VAR 

or

VAR=`shell_command` 
echo $VAR 

But when I do this:

VAR="$(python2.7 -V)" 
echo "Version is $VAR" 

or

VAR=`python2.7 -V` 
echo "Version is $VAR" 

I see:

Python 2.7.14 
Version is 

IOW, I'm not storing the result? Why is that? I just want pure bash, and I want to understand why it isn't doing what I expect. Thanks!

Answer (1 vote):

In this particular case, it's because Python prints the version to its standard error stream. The $(...) construct (or backticks) only captures what the given command sends to standard output.

Here, you can work around that by writing $(python2.7 -V 2>&1). The 2>&1 is shell code meaning "make standard error a copy of the standard output stream", so anything Python thinks it is writing to standard error actually arrives at standard output's destination.

Note that in some cases, incorrect use of quotes can cause similar problems. In general, it's a good idea to put double quotes around command substitutions:

VAR="$(python2.7 -V 2>&1)" 

That turns out not to be the issue in this case, though.
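As a related aside (not part of the answer above), a small sketch of the opposite capture: if you ever want only a command's stderr, you can swap the streams inside the substitution:

VAR="$(python2.7 -V 2>&1 >/dev/null)"  # 2>&1 first points stderr at the capture pipe, then stdout is discarded 
echo "Version is $VAR" 

The order of the two redirections matters here; reversing them would discard stderr instead.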

Answer (1 vote):

Try this:

python2.7 -V >/dev/null 

You still see the output afterwards, which means the version information is not being sent to standard output (stdout).

Then this:

python2.7 -V 2>/dev/null 

The output disappears, further confirming that it is sent to standard error.

So you want to do this:

VAR="$(python2.7 -V 2>&1)" 
#     ^^^^ 
# Redirect stderr to stdout 

This works for me.


Restarting the django runserver inside screen with a shell script


How can I restart the django runserver using a shell script? I run the django server inside screen.

This is my shell script, restartpython.sh:

killall -9 python 
screen -r 
sleep 5 
exec python manage.py runserver 0.0.0.0:8000 
ctrl+a d   # how to make this into shell script?? 

When I execute this script, I enter the screen session and the python server gets killed. But the script doesn't run this line:

exec python manage.py runserver 0.0.0.0:8000 

It shows this error:

python: can't open file 'manage.py': [Errno 2] No such file or directory 

Also, how can I run Ctrl+a d (detach from screen) inside a shell script? Thanks.


This sounds like a hacked-together pseudo-production setup. [runserver must not be used in production](https://docs.djangoproject.com/en/1.10/ref/django-admin/#runserver). What are you trying to do? – Chris


@Chris I'm trying to make a script to restart the django runserver, because the runserver sometimes gets stuck. I'm still in the development stage; I'll use another method for production. – Krisnadi


Usually I just run 'python manage.py' and leave it running in an open terminal window. Is there any particular reason to complicate things by adding 'screen' and a custom shell script on top? (Note that 'killall -9 python' is almost certainly not something that should be in there, and you probably don't need 'exec' either...) – Chris

Answer:

Instead of runserver, use gunicorn and reload your server with kill -HUP.
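A minimal sketch of that approach (the WSGI module path and PID file location here are placeholders, not from the question):

gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --pid /tmp/gunicorn.pid --daemon 
# graceful reload of the workers, keeping the master process alive 
kill -HUP "$(cat /tmp/gunicorn.pid)" 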


That means giving up the development server's auto-reload feature... – Chris


See http://superuser.com/questions/181517/how-to-execute-a-command-whenever-a-file-changes – Udi
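For the watch-and-rerun idea in that link, a rough sketch using inotifywait (from inotify-tools, assumed to be installed; the watched directory is a placeholder):

while inotifywait -e modify -r ./myproject; do 
    kill -HUP "$(cat /tmp/gunicorn.pid)"   # reuse the PID file from the gunicorn sketch above 
done 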


How to add a linux bash script file to terraform code?


My requirement is that I need to create 3 aws instances with terraform and run 3 different bash scripts in them. All the files are on the same server.

I already have the terraform code to create the infrastructure, and 3 bash scripts ready to use.

resource "aws_instance" "master" { 
    instance_type = "t2.xlarge" 
    ami = "${data.aws_ami.ubuntu.id}" 
    key_name = "${aws_key_pair.auth.id}" 
    vpc_security_group_ids = ["${aws_security_group.public.id}"] 
    subnet_id = "${aws_subnet.public1.id}" 
} 

This is my terraform code to create an AWS instance, but I don't know how I can integrate the two.

Can I also use the AWS instance IP value as a variable value in the linux bash scripts? If so, how do I pass this IP value into one of my bash script variables? Thanks.


What have you tried so far? Can you include your code? – Vandal


Local-exec in terraform, but it didn't work. –


Are you using multiple accounts to edit the same post? Not the best idea; now your edits have to go through review. –

Answer (1 vote):

If you only need to run the scripts once, then pairing this with AWS user-data scripts is perfect.

Put your script into the file templates/user_data.tpl and use the template provider to create a template. Then you just need to pass the rendered script to the user_data argument of the aws_instance resource.

Modify as needed.

templates/user_data.tpl

#!/bin/bash 
echo ${master_ip} 

terraform_file.tf

resource "aws_instance" "master" { 
    instance_type   = "t2.xlarge" 
    ami     = "${data.aws_ami.ubuntu.id}" 
    key_name    = "${aws_key_pair.auth.id}" 
    vpc_security_group_ids = ["${aws_security_group.public.id}"] 
    subnet_id    = "${aws_subnet.public1.id}" 
} 

resource "aws_instance" "slave" { 
    instance_type   = "t2.xlarge" 
    ami     = "${data.aws_ami.ubuntu.id}" 
    key_name    = "${aws_key_pair.auth.id}" 
    vpc_security_group_ids = ["${aws_security_group.public.id}"] 
    subnet_id    = "${aws_subnet.public1.id}" 

    user_data = "${data.template_file.user_data.rendered}" 
} 

data "template_file" "user_data" { 
    template = "${file("templates/user_data.tpl")}" 

    vars { 
    master_ip = "${aws_instance.master.private_ip}" 
    } 
} 
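If you want the rendered IP available as a shell variable inside the script, a slightly fuller templates/user_data.tpl could look like this (a sketch; the variable name and log path are arbitrary):

#!/bin/bash 
# ${master_ip} is interpolated by Terraform before the instance boots; 
# the plain $MASTER_IP below is left alone because it has no braces 
MASTER_IP="${master_ip}" 
echo "master is reachable at $MASTER_IP" >> /var/log/bootstrap.log 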

This is exactly what I needed.. thanks –


Bash: Start and kill child processes


I have a program that I want to start. Let's say this program runs a while(true) loop (so it doesn't terminate). I want to write a bash script which:

  1. Starts the program (./endlessloop &)
  2. Waits 1 second (sleep 1)
  3. Kills the program (how?!)

I can't use $! to get the PID from the child, because the server runs a lot of instances at the same time.


What code have you tried? How did it fail? – CAB


I tried the two things in parentheses above. The question is: how do I kill the endlessloop? –

Answer (2 votes):

Store the PID:

./endlessloop & endlessloop_pid=$! 
sleep 1 
kill "$endlessloop_pid" 

You can also check whether the process is still running with kill -0:

if kill -0 "$endlessloop_pid"; then 
    echo "Endlessloop is still running" 
fi 

...and storing the contents in a variable means it can scale to multiple processes:

endlessloop_pids=()      # initialize an empty array to store PIDs 
./endlessloop & endlessloop_pids+=("$!") # start one in background and store its PID 
./endlessloop & endlessloop_pids+=("$!") # start another and store its PID also 
kill "${endlessloop_pids[@]}"    # kill both endlessloop instances started above 

See BashFAQ #68, "How do I run a command, and have it abort (timeout) after N seconds?"

The ProcessManagement page on the Wooledge wiki also discusses relevant best practices.
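As that FAQ notes, if GNU coreutils is available, the same start-wait-kill pattern can be written with the timeout utility (a sketch):

timeout 1s ./endlessloop   # runs endlessloop and sends it SIGTERM after 1 second 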


But I don't want to kill all endlessloop processes, just the one that was started... –


@今天春天, that's exactly what this answer does. $endlessloop_pid stores only the PID that was started, or the PIDs in the case of the endlessloop_pids array. –


The point is that I can't distinguish which one is the right one to kill. –


How to properly nest Bash backticks

Either I missed some backslashes or backslashing does not seem to work with too much programmer-quote-looping.

$ echo "hello1-`echo hello2-`echo hello3-`echo hello4```"

hello1-hello2-hello3-echo hello4

Wanted

hello1-hello2-hello3-hello4-hello5-hello6-...
Accepted answer (123 votes):

Use $(commands) instead:

$ echo "hello1-$(echo hello2-$(echo hello3-$(echo hello4)))"
hello1-hello2-hello3-hello4

$(commands) does the same thing as backticks, but you can nest them.

You may also be interested in Bash brace expansion ranges:

echo hello{1..10}
hello1 hello2 hello3 hello4 hello5 hello6 hello7 hello8 hello9 hello10
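One further point worth a quick sketch (not in the original answer): double quotes nest cleanly inside $( ), because each substitution starts a fresh quoting context:

outer="$(echo "hello $(echo "nested quotes") work")" 
echo "$outer"   # prints: hello nested quotes work 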

+1 for the {1..10} thing. Limit it with an array? zsh can do "${$(date)[2,4]}". Why not: "echo ${echo hello1-$(echo hello2)[1]}"? – 嗯 Apr 17 '10 at 11:03

Doesn't this create a subshell for each nested command? – Jordan Mackie Jul 4 '18 at 9:49

Answer (31 votes):

If you insist on using backticks, the following can be done:

$ echo "hello1-`echo hello2-`echo hello3-\`echo hello4\```"

You have to put backslashes, \ \\ \\\\, doubled at each level and so on; it's just very ugly. Use $(commands) as others suggested.

This actually answers the question – Justas Dec 23 at 20:06

Answer (10 votes):

Any time you want to evaluate a command use command substitution:

$(command)

Any time you want to evaluate an arithmetic expression use expression substitution:

$((expr))

You can nest these like this:

Let's say file1.txt is 30 lines long and file2.txt is 10 lines long; then you can evaluate an expression like this:

$(( $(wc -l < file1.txt) - $(wc -l < file2.txt) ))

which would output 20 (the difference in the number of lines between the two files).

Answer (9 votes):

It's a lot easier if you use bash's $(cmd) command substitution syntax, which is much more friendly to being nested:

$ echo "hello1-$(echo hello2-$(echo hello3-$(echo hello4)))"
hello1-hello2-hello3-hello4

This is not limited to bash. It works in all shells conforming to POSIX 1003.1 ("POSIX shell") and most Bourne-derived shells (ksh, ash, dash, bash, zsh, etc.), but not in the actual Bourne shell (i.e. heirloom.sourceforge.net/sh.html). – Chris Johnsen Apr 17 '10 at 3:02


Check file existence with wget


I need to check whether a file exists via wget, testing the exit code.

Right now, I run the following command:

wget -q --spider --ftp-user='ftpuser' --ftp-password='ftpassword' ftp://192.168.1.63/fileexists.txt 
echo $? #0 

and its return code is 0.

But in the case where the file doesn't exist:

wget -q --spider --ftp-user='ftpuser' --ftp-password='ftpassword' ftp://192.168.1.63/filenotexist.txt 
echo $? #0 

the return code is still 0, even though the file isn't there.

So I tried without the --spider option, and I got exit code 8, which means the file does not exist.

But if the file is there, wget actually downloads it. That's a problem if the file I'm "checking" is large.

Any ideas? Thanks.


Use something other than wget. There's a tool called lftp which can reasonably be scripted. – tripleee


-bash: lftp: command not found ... can't use it –


What version of wget is this? – Joe

Answer (1 vote):

How about using curl?

curl -I --silent ftp://username:password@192.168.1.63/filenotexist.txt >/dev/null 

$? is 0 if the file exists, and non-zero if the file does not exist.
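A small sketch of wiring that exit status into a script (credentials and host are the placeholders from above):

if curl -I --silent ftp://username:password@192.168.1.63/fileexists.txt >/dev/null; then 
    echo "file exists" 
else 
    echo "file does not exist" 
fi 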


I like wget, but I'll use this alternative. Thanks –


echo & if-else in a .hql (HIVE) file


How do I execute echo and if-else inside a .hql (hive) file?

I'm able to execute echo like below:

!echo "test"; 

But I can't execute an if-else like below:

!if [ 1=1] then 
!echo "if is working" 
!else 
!echo "not working" 
!fi 

Thanks

Answer:

I think ! can only be used with a single command. To run a multi-line script, put it into a shell script and run that:

Unix_prompt$ cat script.sh 
if [ 1 = 1 ] 
then 
echo "if is working" 
else 
echo "not working" 
fi 


HIVE> !./script.sh; 

I did that too. Not running it as a single command: I saved the code in a shell script, saved it as a file with the .hql extension, and ran it. But no luck with the if-else. – Data


I don't think the problem is HIVE. Your if condition has no space before ']', and 'then' should be on the next line. –


Terminating a terminal application in bash when a term is displayed


Is it possible to write a bash script that runs a python application until the application's output displays a specified term, and then closes the application?

Edit: the output looks like this:

2017-11-11 14:21:27 LOG: 192.168.0.1 - Administrator:Test54 
2017-11-11 14:21:28 LOG: 192.168.0.1 - Administrator:Test55 
2017-11-11 14:21:29 LOG: 192.168.0.1 - Administrator:Test56 
2017-11-11 14:21:30 LOG: 192.168.0.1 - Administrator:Test57 
2017-11-11 14:21:31 LOG: 192.168.0.1 - Administrator:Test58 
2017-11-11 14:21:32 LOG: 192.168.0.1 - Administrator:Test59 
2017-11-11 14:21:33 LOG: 192.168.0.1 - Administrator:Test60 
2017-11-11 14:21:34 LOG: 192.168.0.1 - Administrator:Test61 
2017-11-11 14:21:35 LOG: 192.168.0.1 - Administrator:Test62 
2017-11-11 14:21:35 SUCCESS : 192.168.0.1 - Administrator:Test62 

It should close after SUCCESS is displayed.

Answer (2 votes):

Let's say you have the following Python program:

#!/usr/bin/python 
for i in range(0,5001): 
    print(i) 

You can terminate it when it outputs 2500 like this:

stdbuf -oL python a.py | sed '/2500/q' 

Note that this is a bit dirty; the program may receive a

Traceback (most recent call last): 
    File "a.py", line 2, in <module> 
    print(i) 
BrokenPipeError: [Errno 32] Broken pipe 

but if you wanted to kill the program anyway...


Important note:

This won't work if the script writes to standard error (as logging.StreamHandler does by default), or if it handles BrokenPipeError. The former can easily be fixed with a |& redirection, but in the latter case kill -INT would be better.
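Applied to the log format in the question, the same idea would be roughly (a sketch; app.py is a placeholder for the real program, and |& also captures stderr):

stdbuf -oL python app.py |& sed '/SUCCESS/q' 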


It doesn't kill the program; I've edited my question. –


'program | sed "/SUCCESS/q"' should work. – hek2mgl


Note that this won't work if the script writes to stderr (as [logging.StreamHandler](https://docs.python.org/3/library/logging.handlers.html#logging.StreamHandler) does by default), or if it's handling BrokenPipeError. The former can easily be fixed with a |& redirection, but in the latter case kill -INT would be better. – randomir


Fastest possible grep

I'd like to know if there is any tip to make grep as fast as possible. I have a rather large base of text files to search in the quickest possible way. I've made them all lowercase, so that I could get rid of -i option. This makes the search much faster.

Also, I've found out that -F and -P modes are quicker than the default one. I use the former when the search string is not a regular expression (just plain text), the latter if regex is involved.

Does anyone have any experience in speeding up grep? Maybe compile it from scratch with some particular flag (I'm on Linux CentOS), organize the files in a certain fashion or maybe make the search parallel in some way?

Accepted answer (104 votes):

Try with GNU parallel, which includes an example of how to use it with grep:

grep -r greps recursively through directories. On multicore CPUs GNU parallel can often speed this up.

find . -type f | parallel -k -j150% -n 1000 -m grep -H -n STRING {}

This will run 1.5 jobs per core, and give 1000 arguments to grep.

For big files, it can split the input into several chunks with the --pipe and --block arguments:

 parallel --pipe --block 2M grep foo < bigfile

You could also run it on several different machines through SSH (ssh-agent needed to avoid passwords):

parallel --pipe --sshlogin server.example.com,server2.example.net grep foo < bigfile

@shelter That's a useless use of cat. –

Use --color=always to preserve the grep colors (also whenever you're using grep in a pipe) – Jim Feb 21 '14 at 15:38

If find has the -print0 predicate (most do), it would be preferable to use find . -type f -print0 | parallel -0 -k .... My instance of man(1) parallel actually says to do this. Also, I suspect that with globstar you can make this even faster if you're after a specific file pattern: shopt -s globstar; parallel -k -j150% -n 1000 -m fgrep -H -n STRING ::: **/*.c – kojiro Mar 26 '14 at 13:27

@WilliamPursell it's a useful use of cat if you want sudo to access bigfile – Jayen Mar 9 '15 at 7:00

Why set 1.5 jobs per core? Why not 1 job per core? – JohnGalt Apr 18 '16 at 10:21

Answer (70 votes):

If you're searching very large files, then setting your locale can really help.

GNU grep goes a lot faster in the C locale than with UTF-8.

export LC_ALL=C
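To measure the effect on your own data, a quick before/after sketch (file and pattern are placeholders):

time grep -c PATTERN bigfile.txt              # current locale, often UTF-8 
time LC_ALL=C grep -c PATTERN bigfile.txt     # byte-oriented C locale 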

Impressive, it looks like this one-liner gives a 2x speedup. – Fedir RYKHTIK Jul 8 '13 at 13:41

Can someone explain why this is? – Robert E Mealey Dec 18 '14 at 21:12

"Simple byte comparisons vs. multibyte character comparisons" < says my boss... right right – Robert E Mealey Dec 18 '14 at 21:19

So this isn't entirely safe, especially if you're pattern matching (rather than just string matching), or if the contents of your files are not ascii. Still worth doing in some cases, but be careful. – Robert E Mealey Dec 18 '14 at 21:44

@RobertEMealey Did he say "Single" instead of "Simple"? – Elijah Lynn Jul 11 '17 at 1:33

Answer (12 votes):

Ripgrep claims to now be the fastest.

https://github.com/BurntSushi/ripgrep

Also includes parallelism by default

 -j, --threads ARG
              The number of threads to use.  Defaults to the number of logical CPUs (capped at 6).  [default: 0]

From the README

It is built on top of Rust's regex engine. Rust's regex engine uses finite automata, SIMD and aggressive literal optimizations to make searching very fast.

This is extremely fast! – 擊敗 Dec 20 '17 at 14:33

Answer (5 votes):

Apparently using --mmap can help on some systems:

http://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html
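Usage is just an extra flag (a sketch; note that newer GNU grep releases accept --mmap but treat it as a no-op, so this only matters where the option is still honored):

grep --mmap PATTERN bigfile.txt 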

Answer (4 votes):

Not strictly a code improvement but something I found helpful after running grep on 2+ million files.

I moved the operation onto a cheap SSD drive (120GB). At about $100, it's an affordable option if you are crunching lots of files regularly.

Answer (3 votes):

If you don't care about which files contain the string, you might want to separate reading and grepping into two jobs, since it might be costly to spawn grep many times, once for each small file.

  1. If you have one very large file:

    parallel -j100% --pipepart --block 100M -a <very large SEEKABLE file> grep <...>

  2. Many small compressed files (sorted by inode)

    ls -i | sort -n | cut -d' ' -f2 | fgrep .gz | parallel -j80% --group "gzcat {}" | parallel -j50% --pipe --round-robin -u -N1000 grep <..>

I usually compress my files with lz4 for maximum throughput.

  3. If you want just the filename with the match:

    ls -i | sort -n | cut -d' ' -f2 | fgrep .gz | parallel -j100% --group "gzcat {} | grep -lq <..> && echo {}"

Answer (2 votes):

Building on the response by Sandro I looked at the reference he provided here and played around with BSD grep vs. GNU grep. My quick benchmark results showed: GNU grep is way, way faster.

So my recommendation to the original question "fastest possible grep": Make sure you are using GNU grep rather than BSD grep (which is the default on MacOS for example).
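To check which one you're running, a quick sketch (on macOS, Homebrew typically installs GNU grep under the name ggrep):

grep --version   # GNU grep reports "grep (GNU grep) x.y"; macOS's BSD grep reports "grep (BSD grep) ..." 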

On my 13" MacBook Pro, BSD grep performed faster than on an 8 GB, 6-core Linode while searching a 250 MB .sql dump file: 6 s vs 25 s – AnthumChris Feb 25 '15 at 19:25

Answer (2 votes):

I personally use the ag (silver searcher) instead of grep and it's way faster, also you can combine it with parallel and pipe block.

https://github.com/ggreer/the_silver_searcher

Update: I now use https://github.com/BurntSushi/ripgrep which is faster than ag depending on your use case.

I found a bug with this. Sometimes it doesn't go deep in the tree, and I have cases where grep shows the result but ag doesn't. I can't compromise on accuracy for speed. – username_4567 May 25 '16 at 10:04

You should open an issue on their github account and report it (I would do it, but I can't replicate it), because until now I haven't found any inaccuracies. They'll surely sort this out, and yes, you're right, I totally agree: accuracy first. – Jinxmcg May 25 '16 at 10:09

Answer (1 vote):

One thing I've found faster for using grep to search (especially for changing patterns) in a single big file is to use split + grep + xargs with its parallel flag. For instance:

Say you have a file of ids you want to search for, called my_ids.txt, and a big file named bigfile.txt.

Use split to split the file into parts:

# Use split to split the file into x number of files, consider your big file
# size and try to stay under 26 split files to keep the filenames 
# easy from split (xa[a-z]), in my example I have 10 million rows in bigfile
split -l 1000000 bigfile.txt
# Produces output files named xa[a-t]

# Now use split files + xargs to iterate and launch parallel greps with output
for id in $(cat my_ids.txt) ; do ls xa* | xargs -n 1 -P 20 grep $id >> matches.txt ; done
# Here you can tune your parallel greps with -P, in my case I am being greedy
# Also be aware that there's no point in allocating more greps than x files

In my case this cut what would have been a 17 hour job into a 1 hour 20 minute job. I'm sure there's some sort of bell curve here on efficiency, and obviously going over the available cores won't do you any good, but this was a much better solution than any of the above for my requirements as stated. It has the added benefit over the parallel script of using mostly (linux) native tools.

Answer:

cgrep, if it's available, can be orders of magnitude faster than grep.

Answer:

MCE 1.508 includes a dual chunk-level {file, list} wrapper script supporting many C binaries: agrep, grep, egrep, fgrep, and tre-agrep.

https://metacpan.org/source/MARIOROY/MCE-1.509/bin/mce_grep

https://metacpan.org/release/MCE

One does not need to convert to lowercase when wanting -i to run fast. Simply pass --lang=C to mce_grep.

Output order is preserved. The -n and -b output is also correct. Unfortunately, that is not the case for GNU parallel mentioned on this page. I was really hoping for GNU Parallel to work here. In addition, mce_grep does not sub-shell (sh -c /path/to/grep) when calling the binary.

Another alternate is the MCE::Grep module included with MCE.

You need to provide a disclaimer, being the author of the tool. – FractalSpace Jan 25 '18 at 19:16

Answer:

A slight deviation from the original topic: the indexed search command line utilities from the googlecodesearch project are way faster than grep: https://github.com/google/codesearch:

Once you compile it (the golang package is needed), you can index a folder with:

# index current folder
cindex .

The index will be created under ~/.csearchindex

Now you can search:

# search folders previously indexed with cindex
csearch eggs

I'm still piping the results through grep to get colorized matches.


Bash: Delete a line from a file that matches a variable


I have four files named source, correct, wrong, and not_found. I'm trying to write a bash script in which I read each line from the source file, store the line in a variable x, and match it against a condition.

If it passes, then I need to write the line to the file named correct. But the catch is that before writing to correct, I need to check whether the variable x currently exists in the file named wrong; if it does, delete it from there, and then add the line to the file named correct.

I tried the following, but it doesn't modify the file and doesn't give me any output:

sed -i '/$x/d' ./wrong 

I think you mean 'sed', not 'send'. If you use single quotes, there is no expansion, so you're actually searching for '$x' (literally), not the contents of that variable. Use double quotes instead: sed -i "/$x/d" ./wrong –


Thanks for pointing out the mistake. That was actually my iPad doing autocorrect. I've corrected the error now. –


Without -i, 'sed' doesn't modify the file: it reads from standard input and writes to standard output. You need to put the result somewhere and then replace the original file, e.g.: sed '/$x/d' ./wrong > ./wrong.new && mv ./wrong ./wrong.old && mv ./wrong.new ./wrong (if you want to keep a copy of the old one). – fernand0

Answer:

As you've already figured out, variables are not expanded inside '...'.

If you replace the single quotes with double quotes, this will delete the matching lines from ./wrong:

sed -i "/$x/d" ./wrong 

But you also want to add the line to ./correct if there was a match. To do that, you can run grep before sed:

grep "$x" ./wrong >> ./correct 

This will produce the intended effect, but sed will rewrite ./wrong even when it doesn't need to. You can prevent that like this:

if grep "$x" ./wrong >> ./correct; then 
    sed -i "/$x/d" ./wrong 
fi 
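Putting it together with the read loop described in the question, a sketch could look like this (passes_condition is a hypothetical placeholder for whatever test you apply to each line):

while IFS= read -r x; do 
    if passes_condition "$x"; then          # placeholder for your actual check 
        if grep -q "$x" ./wrong; then 
            sed -i "/$x/d" ./wrong          # remove it from ./wrong first 
        fi 
        echo "$x" >> ./correct 
    fi 
done < ./source 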

Thanks janos. I actually need to delete the line containing x and the next two lines after it from ./wrong. Also, –


Thanks janos. I actually need to delete the line containing x and the next two lines after it from ./wrong. So I'm doing: if grep "$x" ./wrong; then sed -i "/$x/,+2d" ./wrong; echo $x >> ./correct; echo line2 >> ./correct; fi. I'm running a separate for loop before the if to hold line2 and line3. Whether I want to delete x from wrong depends on line2 and line3, so I can't directly copy the line containing x and the next two lines from wrong into correct. –


@DattarayaHonrao You are adding significant new details that were not part of the original question. It would be better to ask a new question, including all the details of what you actually need. – janos