How can I check which process initiated sys_open?

I'm taking an operating systems course, and we work in Linux (Red Hat 8.0). I'm trying to implement a file open/close tracker that keeps, for each process, a history of the files it has opened and closed. I expected sys_open and sys_close to also take a process ID, which I could use to find the process that initiated the call and update its history (making the history update part of the sys_open/sys_close functions). However, these functions don't take a pid as a parameter, so I'm at a loss as to how to associate an opened/closed file with the process that initiated the call. My only guess is that, since only one process is active at any given moment, its metadata must be global somehow, but I don't know where or how to find it. Any advice would be appreciated.


Why should eval be avoided in Bash, and what should I use instead?

Time and time again, I see Bash answers on Stack Overflow using eval and the answers get bashed, pun intended, for the use of such an "evil" construct. Why is eval so evil?

If eval can't be used safely, what should I use instead?


There's more to this problem than meets the eye. We'll start with the obvious: eval has the potential to execute "dirty" data. Dirty data is any data that has not been rewritten as safe-for-use-in-situation-XYZ; in our case, it's any string that has not been formatted so as to be safe for evaluation.

Sanitizing data appears easy at first glance. Assuming we're throwing around a list of options, bash already provides a great way to sanitize individual elements, and another way to sanitize the entire array as a single string:

function println
{
    # Send each element as a separate argument, starting with the second element.
    # Arguments to printf:
    #   1 -> "$1\n"
    #   2 -> "$2"
    #   3 -> "$3"
    #   4 -> "$4"
    #   etc.

    printf "$1\n" "${@:2}"
}

function error
{
    # Send the first element as one argument, and the rest of the elements as a combined argument.
    # Arguments to println:
    #   1 -> '\e[31mError (%d): %s\e[m'
    #   2 -> "$1"
    #   3 -> "${*:2}"

    println '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit "$1"
}

# This...
error 1234 Something went wrong.
# And this...
error 1234 'Something went wrong.'
# Result in the same output (as long as $IFS has not been modified).

Now say we want to add an option to redirect output as an argument to println. We could, of course, just redirect the output of println on each call, but for the sake of example, we're not going to do that. We'll need to use eval, since variables can't be used to redirect output.

function println
{
    eval printf "$2\n" "${@:3}" $1
}

function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}

error 1234 Something went wrong.

Looks good, right? Problem is, eval parses the command line twice (in any shell). On the first parsing pass, one layer of quoting is removed. With quotes removed, some variable content gets executed.

We can fix this by letting the variable expansion take place within the eval. All we have to do is single-quote everything, leaving the double-quotes where they are. One exception: we have to expand the redirection prior to eval, so that has to stay outside of the quotes:

function println
{
    eval 'printf "$2\n" "${@:3}"' $1
}

function error
{
    println '>&2' '\e[31mError (%d): %s\e[m' "$1" "${*:2}"
    exit $1
}

error 1234 Something went wrong.

This should work. It's also safe as long as $1 in println is never dirty.

Now hold on just a moment: I use that same unquoted syntax with sudo all of the time! Why does it work there, and not here? Why did we have to single-quote everything? sudo is a bit more modern: it knows to enclose each argument it receives in quotes, though that is an over-simplification. eval simply concatenates everything.

Unfortunately, there is no drop-in replacement for eval that treats arguments like sudo does, as eval is a shell built-in; this is important, as it takes on the environment and scope of the surrounding code when it executes, rather than creating a new stack and scope like a function does.

eval Alternatives

Specific use cases often have viable alternatives to eval. Here's a handy list. command represents what you would normally send to eval; substitute in whatever you please.

No-op

A simple colon is a no-op in bash:

:

Create a sub-shell

( command )   # Standard notation

Execute output of a command

Never rely on an external command. You should always be in control of the return value. Put these on their own lines:

$(command)   # Preferred
`command`    # Old: should be avoided, and often considered deprecated

# Nesting:
$(command1 "$(command2)")
`command "\`command\`"`  # Careful: \ only escapes $ and \ with old style, and
                         # special case \` results in nesting.
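A quick check of the nesting behavior (the variable names here are illustrative):

```shell
# Demo of $( ) nesting; `inner` and `result` are made-up names.
inner="$(echo world)"                      # plain substitution
result="$(echo "hello $(echo world)")"     # nested substitution, no escaping needed
echo "$result"
```

Contrast this with the backtick form, where the inner command would need \`-escaping.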

Redirection based on variable

In calling code, map &3 (or anything higher than &2) to your target:

exec 3<&0         # Redirect from stdin
exec 3>&1         # Redirect to stdout
exec 3>&2         # Redirect to stderr
exec 3> /dev/null # Don't save output anywhere
exec 3> file.txt  # Redirect to file
exec 3> "$var"    # Redirect to file stored in $var--only works for files!
exec 3<&0 4>&1    # Input and output!

If it were a one-time call, you wouldn't have to redirect the entire shell:

func arg1 arg2 3>&2

Within the function being called, redirect to &3:

command <&3       # Redirect stdin
command >&3       # Redirect stdout
command 2>&3      # Redirect stderr
command &>&3      # Redirect stdout and stderr
command 2>&1 >&3  # idem, but for older bash versions
command >&3 2>&1  # Redirect stdout to &3, and stderr to stdout: order matters
command <&3 >&4   # Input and output!
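Putting the two halves together, here is a minimal sketch (the function and file names are made up) of the one-time-call variant, where the caller maps fd 3 just for one invocation:

```shell
#!/usr/bin/env bash
# Sketch of the fd-3 pattern described above; names are illustrative.
log_line() {
    # Inside the function, write to whatever the caller mapped to fd 3.
    echo "$1" >&3
}

out="$(mktemp)"
# One-time call: map fd 3 to a file just for this invocation.
log_line "hello fd3" 3>"$out"
result="$(cat "$out")"
rm -f "$out"
echo "$result"
```

The function never needs to know (or eval) where fd 3 points; the caller decides.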

Variable indirection

Scenario:

VAR='1 2 3'
REF=VAR

Bad:

eval "echo \"\$$REF\""

Why? If REF contains a double quote, this will break and open the code to exploits. It's possible to sanitize REF, but it's a waste of time when you have this:

echo "${!REF}"

That's right, bash has variable indirection built-in as of version 2. It gets a bit trickier than eval if you want to do something more complex:

# Add to scenario:
VAR_2='4 5 6'

# We could use:
local ref="${REF}_2"
echo "${!ref}"

# Versus the bash < 2 method, which might be simpler to those accustomed to eval:
eval "echo \"\$${REF}_2\""

Regardless, the new method is more intuitive, though it might not seem that way to experienced programmers who are used to eval.
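For completeness, the indirection scenario can be run end to end; this is just the snippets above stitched into one runnable sketch (bash 2+ assumed):

```shell
#!/usr/bin/env bash
# The indirection scenario above, stitched together and runnable (bash 2+).
VAR='1 2 3'
VAR_2='4 5 6'
REF=VAR

first="${!REF}"     # expands to the value of the variable whose name is in REF
ref="${REF}_2"      # build the suffixed name first...
second="${!ref}"    # ...then expand it indirectly
echo "$first / $second"
```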

Associative arrays

Associative arrays are implemented intrinsically in bash 4. One caveat: they must be created using declare.

declare -A VAR   # Local
declare -gA VAR  # Global

# Use spaces between parentheses and contents; I've heard reports of subtle bugs
# on some versions when they are omitted having to do with spaces in keys.
declare -A VAR=( ['']='a' [0]='1' ['duck']='quack' )

VAR+=( ['alpha']='beta' [2]=3 )  # Combine arrays

VAR['cow']='moo'  # Set a single element
unset VAR['cow']  # Unset a single element

unset VAR     # Unset an entire array
unset VAR[@]  # Unset an entire array
unset VAR[*]  # Unset each element with a key corresponding to a file in the
              # current directory; if * doesn't expand, unset the entire array

local KEYS=( "${!VAR[@]}" )  # Get all of the keys in VAR
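A small runnable sketch of the operations above (bash 4+; the array name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the bash 4 associative-array operations listed above.
declare -A animals=( ['duck']='quack' ['cow']='moo' )
animals+=( ['cat']='meow' )        # combine/extend
count_before=${#animals[@]}
unset 'animals[cow]'               # quote it so the brackets don't glob
count_after=${#animals[@]}
keys=( "${!animals[@]}" )          # all remaining keys
echo "$count_before -> $count_after"
```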

In older versions of bash, you can use variable indirection:

VAR=( )  # This will store our keys.

# Store a value with a simple key.
# You will need to declare it in a global scope to make it global prior to bash 4.
# In bash 4, use the -g option.
declare "VAR_$key"="$value"
VAR+="$key"
# Or, if your version is lacking +=
VAR=( "${VAR[@]}" "$key" )

# Recover a simple value.
local var_key="VAR_$key"       # The name of the variable that holds the value
local var_value="${!var_key}"  # The actual value--requires bash 2
# For < bash 2, eval is required for this method.  Safe as long as $key is not dirty.
local var_value="`eval echo -n \"\$$var_key\"`"

# If you don't need to enumerate the indices quickly, and you're on bash 2+, this
# can be cut down to one line per operation:
declare "VAR_$key"="$value"             # Store
var_key="VAR_$key"; echo "${!var_key}"  # Retrieve

# If you're using more complex values, you'll need to hash your keys:
function mkkey
{
    local key="`mkpasswd -5R0 "$1" 00000000`"
    echo -n "${key##*$}"
}

local var_key="VAR_`mkkey "$key"`"
# ...

@tmow Ah, so you really do want eval's functionality. If that's what you want, then you can use eval; just keep in mind that it has lots of security caveats. It also suggests a design flaw in your application. - Zenexer Sep 7 '16 at 5:33

The ref="${REF}_2" echo "${!ref}" example is wrong and doesn't work, because bash substitutes variables before executing the command. If the ref variable really was undefined beforehand, the result of the substitution will be ref="VAR_2" echo "", and that is what will be executed. - Yoory N. Dec 19 '17 at 13:37


How to make eval safe

eval can be safely used - but all of its arguments need to be quoted first. Here's how:

This function will do it for you:

function token_quote {
  local quoted=()
  for token; do
    quoted+=( "$(printf '%q' "$token")" )
  done
  printf '%s\n' "${quoted[*]}"
}

Example usage:

Given some untrusted user input:

% input="Trying to hack you; date"

Construct a command to eval:

% cmd=(echo "User gave:" "$input")

Eval it, with seemingly correct quoting:

% eval "$(echo "${cmd[@]}")"
User gave: Trying to hack you
Thu Sep 27 20:41:31 +07 2018

Note you were hacked. date was executed rather than being printed literally.

Instead with token_quote():

% eval "$(token_quote "${cmd[@]}")"
User gave: Trying to hack you; date
%

eval isn't evil - it's just misunderstood :)
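The same %q idea works on a single value: quote once, and eval's re-parse removes exactly that one layer. A minimal sketch with made-up variable names:

```shell
#!/usr/bin/env bash
# Sketch: %q-quote one untrusted value, then let eval strip that one layer.
input='hello; date'                 # would run `date` if eval saw it unquoted
quoted="$(printf '%q' "$input")"
eval "copy=$quoted"                 # the %q layer is what eval's re-parse consumes
echo "$copy"
```

The semicolon survives intact instead of splitting off a second command.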


What about

ls -la /path/to/foo | grep bar | bash

or

(ls -la /path/to/foo | grep bar) | bash

?

I'm not sure what you're trying to do with these commands, but you definitely should not use them if at all possible. - Zenexer Jun 6 '17 at 0:05

This is an answer, but not a very good one. Don't use moderator flags for this, or they will be declined. - Samuel Liew♦ Apr 14 '18 at 13:01


C++ boost::thread: how to start a thread inside an object

How do I start a thread inside an object? For example:

class ABC 
{ 
public: 
void Start(); 
double x; 
boost::thread m_thread; 
}; 

ABC abc; 
... do something here ... 
... how can I start the thread with Start() function?, ... 
... e.g., abc.m_thread = boost::thread(&abc.Start()); ... 

This way, later I can do:

abc.m_thread.interrupt(); 
abc.m_thread.join(); 

Thanks.


Use boost::bind:

boost::thread(boost::bind(&ABC::Start, abc)); 

You may need a pointer (or a shared_ptr):

boost::thread* m_thread; 
m_thread = new boost::thread(boost::bind(&ABC::Start, abc)); 

Thanks Guy, it works well. – 2607 2012-02-26 23:50:26


You need neither bind nor a pointer.

boost::thread m_thread; 
//... 
m_thread = boost::thread(&ABC::Start, abc); 

+1: You're right. The constructor with arguments is equivalent to using bind. I prefer bind because I find it more readable. There is also support for moving threads; I think I liked the pointer because I knew what was happening (copy vs. move), but hopefully everything is heading toward move... – 2012-02-27 21:26:59

This should be the accepted answer – user463035818 2017-06-01 14:08:43


How to convert DOS/Windows newline (CRLF) to Unix newline (LF) in a Bash script?

How can I programmatically (i.e., not using vi) convert DOS/Windows newlines to Unix?

The dos2unix and unix2dos commands are not available on certain systems. How can I emulate these with commands like sed/awk/tr?


This problem can be solved with standard tools, but there are sufficiently many traps for the unwary that I recommend you install the flip command, which was written over 20 years ago by Rahul Dhesi, the author of zoo. It does an excellent job converting file formats while, for example, avoiding the inadvertent destruction of binary files, which is a little too easy if you just race around altering every CRLF you see...

Is there a way to do this in a streaming fashion, without modifying the original file? - augurar Dec 7 '13 at 22:08

@augurar You can check "similar packages": package.debian.org/wheezy/flip - n611x007 Aug 19 '14 at 11:12

I just experienced breaking half of my operating system by running texxto with the wrong flags. Be especially careful if you're doing this on whole folders. - A_P Sep 13 '18 at 13:21


The solutions posted so far only deal with part of the problem, converting DOS/Windows' CRLF into Unix's LF; the part they're missing is that DOS uses CRLF as a line separator, while Unix uses LF as a line terminator. The difference is that a DOS file (usually) won't have anything after the last line in the file, while Unix will. To do the conversion properly, you need to add that final LF (unless the file is zero-length, i.e. has no lines in it at all). My favorite incantation for this (with a little added logic to handle Mac-style CR-separated files, and not molest files that're already in unix format) is a bit of perl:

perl -pe 'if ( s/\r\n?/\n/g ) { $f=1 }; if ( $f || ! $m ) { s/([^\n])\z/$1\n/ }; $m=1' PCfile.txt

Note that this sends the Unixified version of the file to stdout. If you want to replace the file with a Unixified version, add perl's -i flag.

RIP my data files. xD - Ludovic Zenohate Lagouardette Jan 21 '16 at 10:53

@LudovicZenohateLagouardette Was it a plain text file (i.e. csv or tab-delimited text), or something else? If it was in some database format, manipulating it as though it were text would very likely corrupt its internal structure. - Gordon Davisson Jan 23 '16 at 20:53

A plain text csv, but I think the encoding was strange; I think that's what messed it up. Don't worry though. I always keep backups, and it wasn't even the real dataset, only 1 GB. The real one is 26 GB. - Ludovic Zenohate Lagouardette Jan 24 '16 at 8:02


If you don't have access to dos2unix, but can read this page, then you can copy/paste dos2unix.py from here.

#!/usr/bin/env python
"""
convert dos linefeeds (crlf) to unix (lf)
usage: dos2unix.py <input> <output>
"""
import sys

if len(sys.argv[1:]) != 2:
  sys.exit(__doc__)

content = ''
outsize = 0
with open(sys.argv[1], 'rb') as infile:
  content = infile.read()
with open(sys.argv[2], 'wb') as output:
  for line in content.splitlines():
    outsize += len(line) + 1
    output.write(line + b'\n')

print("Done. Saved %s bytes." % (len(content) - outsize))

Cross-posted from superuser.

The usage is misleading. The real dos2unix converts all input files by default; your usage implies the -n parameter. And the real dos2unix is a filter that reads from stdin and writes to stdout if no files are given. - jfs Jul 6 '15 at 11:32


You can use vim programmatically with the option -c {command} :

Dos to Unix:

vim file.txt -c "set ff=unix" -c ":wq"

Unix to dos:

vim file.txt -c "set ff=dos" -c ":wq"

"set ff=unix/dos" means change fileformat (ff) of the file to Unix/DOS end of line format

":wq" means write file to disk and quit the editor (allowing to use the command in a loop)

This seems like the most elegant solution, but the lack of an explanation of what wq means is unfortunate. - Jorrick Sleijster Feb 23 at 12:23

Anyone who has used vi will know what :wq means. For those who haven't, the 3 characters mean 1) open vi's command area, 2) write, and 3) quit. - David Newcomb Feb 27 at 10:24

I didn't know you could pass commands to vim non-interactively from the CLI - Robert Dundon Apr 4 at 13:24


Super duper easy with PCRE;

As a script, or replace $@ with your files.

#!/usr/bin/env bash
perl -pi -e 's/\r\n/\n/g' -- "$@"

This will overwrite your files in place!

I recommend only doing this with a backup (version control or otherwise)

Thanks! This works, although I'm writing the file name without the --. I chose this solution because it's easy to understand and adapt. FYI, this is what the switches do: -p assumes a "while input" loop, -i edits the input file in place, -e executes the following command - Rolf Nov 11 '10 at 12:21

Strictly speaking, PCRE is a reimplementation of Perl's regex engine, not Perl's regex engine itself. They both have this capability, although there are also differences, despite what the name suggests. - Oct 27 '17 at 8:24


To convert a file in place do

dos2unix <filename>

To output converted text to a different file do

dos2unix -n <input-file> <output-file>

It's already installed on Ubuntu and is available on homebrew with brew install dos2unix


I know the question explicitly asks for alternatives to this utility but this is the first google search result for "convert dos to unix line endings".


An even simpler awk solution w/o a program:

awk -v ORS='\r\n' '1' unix.txt > dos.txt

Technically '1' is your program, b/c awk requires one when given an option.

UPDATE: After revisiting this page for the first time in a long time I realized that no one has yet posted an internal solution, so here is one:

while IFS= read -r line;
do printf '%s\n' "${line%$'\r'}";
done < dos.txt > unix.txt

This is handy, but just to be clear: this converts Unix -> Windows/DOS, which is the opposite direction of what the OP asked for. - mklement0 Feb 28 '15 at 6:01

That was intentional, left as an exercise for the author. eyerolls awk -v RS='\r\n' '1' dos.txt > unix.txt - nawK Mar 1 '15 at 4:14

Great (and kudos for your educational approach). - mklement0 Mar 1 '15 at 4:35

"b/c awk requires one when given an option." - awk always requires a program, whether or not options are specified. - mklement0 Mar 1 '15 at 4:37

The pure bash solution is interesting, but much slower than an equivalent awk or sed solution. Also, you must use IFS= read -r line to faithfully preserve the input lines, otherwise leading and trailing whitespace is trimmed (alternatively, use no variable name in the read command and use $REPLY). - mklement0 Mar 1 '15 at 6:14


interestingly in my git-bash on windows sed "" did the trick already:

$ echo -e "abc\r" > tst.txt
$ file tst.txt
tst.txt: ASCII text, with CRLF line terminators
$ sed -i "" tst.txt
$ file tst.txt
tst.txt: ASCII text

My guess is that sed ignores them when reading lines from input and always writes unix line endings on output.


This worked for me

tr "\r\n" "\n" < sampledata.csv > sampledata2.csv 

This converts each DOS newline into two UNIX newlines. - Melebius Aug 4 '15 at 6:11
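If the doubled newlines are a problem, a common variant simply deletes the CRs instead of translating them; a small sketch:

```shell
#!/usr/bin/env bash
# Sketch: delete CRs rather than translating them, so CRLF collapses to a single LF.
tmp="$(mktemp -d)"
printf 'x\r\ny\r\n' > "$tmp/dos.csv"
tr -d '\r' < "$tmp/dos.csv" > "$tmp/unix.csv"
printf 'x\ny\n' > "$tmp/expect.csv"
if cmp -s "$tmp/unix.csv" "$tmp/expect.csv"; then result="clean"; else result="mismatch"; fi
rm -rf "$tmp"
echo "$result"
```

Note that, like the tr answer above, this also strips lone CRs that are not part of a CRLF pair.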


I had just pondered that same question (on the Windows side, but equally applicable to Linux). Surprisingly, nobody has mentioned a very much automated way of doing CRLF<->LF conversion for text files using the good old zip -ll option (Info-ZIP):

zip -ll textfiles-lf.zip files-with-crlf-eol.*
unzip textfiles-lf.zip 

NOTE: this would create a zip file preserving the original file names but converting the line endings to LF. Then unzip would extract the files as zip'ed, that is with their original names (but with LF-endings), thus prompting to overwrite the local original files if any.

Relevant excerpt from the zip --help:

zip --help
...
-l   convert LF to CR LF (-ll CR LF to LF)

TIMTOWTDI!

perl -pe 's/\r\n/\n/; s/([^\n])\z/$1\n/ if eof' PCfile.txt

Based on @GordonDavisson

One must consider the possibility of [noeol] ...


For Mac OS X, if you have homebrew installed (http://brew.sh/):

brew install dos2unix

for csv in *.csv; do dos2unix -c mac ${csv}; done;

Make sure you have made copies of the files, as this command will modify the files in place. The -c mac option makes the switch to be compatible with osx.

dos2unix turned out to be very handy! - HelloGoodbye Aug 21 '14 at 15:12

This answer doesn't really address the original poster's question. - hlin117 Feb 7 '15 at 17:43

OS X users should not use -c mac; it is for converting CR-only newlines. You only want to use that mode for files from Mac OS 9 or earlier. - askewchan Apr 14 '16 at 13:20


You can use awk. Set the record separator (RS) to a regexp that matches all possible newline character, or characters. And set the output record separator (ORS) to the unix-style newline character.

awk 'BEGIN{RS="\r|\n|\r\n|\n\r";ORS="\n"}{print}' windows_or_macos.txt > unix.txt
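A word of caution before copying this: a regexp RS is a gawk/mawk extension (POSIX awk only honors the first character of RS). A sketch exercising the trick on a file mixing DOS and old-Mac endings:

```shell
#!/usr/bin/env bash
# Exercise the RS/ORS trick above on mixed DOS (CRLF) and old-Mac (CR) endings.
# Note: a regexp RS needs gawk or mawk; the two-char alternatives are listed
# first here for engines that aren't strictly longest-match.
tmp="$(mktemp -d)"
printf 'p\r\nq\rr\n' > "$tmp/mixed.txt"
awk 'BEGIN{RS="\r\n|\n\r|\r|\n";ORS="\n"}{print}' "$tmp/mixed.txt" > "$tmp/unix.txt"
printf 'p\nq\nr\n' > "$tmp/expect.txt"
if cmp -s "$tmp/unix.txt" "$tmp/expect.txt"; then result="normalized"; else result="mismatch"; fi
rm -rf "$tmp"
echo "$result"
```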

This is the one that worked for me (MacOS, git diff showing ^M, edited in vim) - Dorian Mar 1 '17 at 9:17


On Linux it's easy to convert ^M (ctrl-M) to *nix newlines (^J) with sed.

It will look something like this on the CLI; there will actually be a line break in the text. However, the \ passes that ^J along to sed:

sed 's/^M/\
/g' < ffmpeg.log > new.log

You get this by typing ^V (Ctrl-V), ^M (Ctrl-M) and \ (backslash) as you type:

sed 's/^V^M/^V^J/g' < ffmpeg.log > new.log

As an extension to Jonathan Leffler's Unix to DOS solution, to safely convert to DOS when you're unsure of the file's current line endings:

sed '/^M$/! s/$/^M/'

This checks that the line does not already end in CRLF before converting to CRLF.

sed --expression='s/\r$//'

Since the question mentions sed, this is the most straightforward way to use sed to achieve this. What the expression says is: replace the carriage-return-and-line-feed pair at the end of each line with just a line-feed. That is what you need when you go from Windows to Unix. I verified it works.

Hey John Paul - this answer was flagged for deletion, so it showed up in my review queue. Generally, when you have an 8-year-old question with 22 answers, you'll want to explain how your answer is useful where the other existing answers aren't. - zzxyz Aug 18 '18 at 22:34


I made a script based on the accepted answer so you can convert it directly without needing an additional file in the end and removing and renaming afterwards.

convert-crlf-to-lf() {
    file="$1"
    tr -d '\015' <"$file" >"$file"2
    rm -rf "$file"
    mv "$file"2 "$file"
}

just make sure if you have a file like "file1.txt" that "file1.txt2" doesn't already exist or it will be overwritten, I use this as a temporary place to store the file in.
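A variant of the same function using mktemp for the scratch file, which sidesteps that collision caveat (a sketch, not a drop-in replacement):

```shell
#!/usr/bin/env bash
# Sketch: same conversion, but mktemp avoids the "$file"2 name collision.
convert_crlf_to_lf() {
    local file="$1" tmp
    tmp="$(mktemp)" || return 1
    tr -d '\015' < "$file" > "$tmp" && mv "$tmp" "$file"
}

demo="$(mktemp)"
printf 'a\r\nb\r\n' > "$demo"
convert_crlf_to_lf "$demo"
result="$(tr -d '\n' < "$demo")"   # "ab" only if every CR was stripped
rm -f "$demo"
echo "$result"
```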


I tried sed 's/^M$//' file.txt on OSX as well as several other methods (http://www.thingy-ma-jig.co.uk/blog/25-11-2010/fixing-dos-line-endings or http://hintsforums.macworld.com/archive/index.php/t-125.html). None worked; the file remained unchanged (btw Ctrl-V Enter was needed to reproduce ^M). In the end I used TextWrangler. It's not strictly command line but it works and it doesn't complain.


There are plenty of awk/sed/etc answers so as a supplement (since this is one of the top search results for this issue):

You may not have dos2unix but do you have iconv?

iconv -f UTF-16LE -t UTF-8 [filename.txt]
-f from format type
-t to format type

Or all files in a directory:

find . -name "*.sql" -exec iconv -f UTF-16LE -t UTF-8 {} -o ./{} \;

This runs the same command, on all .sql files in the current folder. -o is the output directory so you can have it replace the current files, or, for safety/backup reasons, output to a separate directory.

This attempts an encoding conversion from UTF-16LE to UTF-8, but it doesn't touch line endings. It has nothing to do with the question asked. - Palec Oct 13 '10 at 13:36

My bad. I'll verify that; however, I just used it THAT DAY to fix my problem of grep not running on my files because they were Windows-formatted. - Katastic Voyage Oct 14 '17 at 17:34

That is also a common problem, but not the one the OP asked about (and much less common than the CRLF problem). - Oct 27 '17 at 8:22


fork() in a multithreaded process

Can the fork() function be used to duplicate a multithreaded process? If so, will all the threads be exactly the same, and if not, why not? If the duplication can't be done via fork, is there another function that can do it for me?

Have you seen [this question](http://stackoverflow.com/questions/1235516/fork-in-multi-threaded-program)? Or [this one](http://stackoverflow.com/questions/1073954/fork-and-existing-threads)? Basically, only the thread that calls fork() exists in the child process. What are you trying to achieve? – Zecc 2011-05-19 10:17:21

Actually, I'm trying to create a replicated process for reliable execution, where the replica verifies the output of the main process by executing the same code. – MetallicPriest 2011-05-23 13:20:45

After forking, only one thread is running in the child. This is required by the POSIX standard. See the top answer to the question fork and existing threads?

No, the child will only have a single thread. Forking threads is not trivial. (See the article Threads and fork(): think twice before mixing them for a good overview.)

I don't know of any way to clone a process together with all of its threads, and I don't think it's possible on Linux.

+1, very informative blog post. – DarkDust 2011-05-19 10:11:29

fork creates a new process with its own thread(s) and copies of the file descriptors and virtual memory.

The child process does not share the same memory as its father. So it is absolutely not the same.


Are child processes also killed when the parent process is killed with "kill -9"?

This morning a colleague of mine told me that when he killed supervisord with "kill -9", supervisord's child processes were not killed.

He was quite sure of it, but I tried many times and could not reproduce that situation.

So when a parent process is killed with "kill -9", does Linux make sure that its child processes are killed too?

You would have to turn the child processes into daemons to have them killed when the father dies; otherwise they will be adopted by init(1).

Here is a link if you're interested in how to create a zombie process and trying it out on your system: http://www.unix.com/unix-dummies-questions-answers/100737-how-do-you-create-zombie-process.html – Klathzazt 2009-09-29 10:55:00

Daemons, parents, children, and zombie processes. Computing tells such a fun and whimsical story. – jwarner112 2013-09-30 17:02:39

No, child processes are not necessarily killed when the parent dies.

However, if the child has an open pipe it is writing to and the parent was reading from it, it will receive SIGPIPE the next time it tries to write to the pipe, for which the default action is to kill it. That is often what happens in practice.

On UNIX, there is no enforced relationship between the lifetimes of parent and child processes. Strictly speaking, a process terminates only when it calls exit() or receives an unhandled signal whose default behavior is termination.

However, the entire "foreground process group" in the "controlling terminal" can receive signals such as SIGINT and SIGQUIT when the user hits Ctrl-C, Ctrl-\, etc. at the terminal. The specific behavior is partly implemented by the login shell (with help from the tty driver). The details can get quite complex: look here and here.
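The first point is easy to demonstrate: SIGKILL a parent and watch its child live on (a timing-based sketch; the sleep durations are arbitrary):

```shell
#!/usr/bin/env bash
# Timing-based sketch: SIGKILL a parent and check that its child survives.
pidfile="$(mktemp)"
# The inner shell backgrounds a child, records its pid ($0 is the pidfile), and waits.
bash -c 'sleep 30 & echo $! > "$0"; wait' "$pidfile" &
parent=$!
sleep 1                        # let the parent spawn its child
kill -9 "$parent"
sleep 1                        # let the kernel reap and reparent
child="$(cat "$pidfile")"
if kill -0 "$child" 2>/dev/null; then status="survived"; else status="died"; fi
kill -9 "$child" 2>/dev/null   # clean up the orphan
rm -f "$pidfile"
echo "child $status"
```

After the kill -9, the orphaned sleep is still running; its parent pid will show as 1 (or a subreaper) in ps.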


You just need to know which process or service you want to kill. In my case it is httpd.

killall -9 httpd 

It will kill all of httpd's child processes.

Completely missed the point of the question. – 2012-11-14 02:46:57

This post doesn't answer the question asked, and given what the other answers do, you may want to delete it. – 2012-11-14 02:47:50


If you kill the terminal pid, which is the parent process ID of a process, the terminal is closed. When the terminal closes, all of its processes are killed too. However, if you create a subshell inside the shell, then if you create a process there and kill that process's ppid, only that subshell is killed and its children become orphans. Their parent becomes init, with pid 1.

[trainee@SIPL ~]$ ps -ef | grep sleep
trainee 3893 3870 0 10:55 pts/1 00:00:00 sleep 4000
trainee 3895 3788 0 10:55 pts/0 00:00:00 grep --color=auto sleep
[trainee@SIPL ~]$ kill -9 3870
[trainee@SIPL ~]$ ps -ef | grep sleep
trainee 3893    1 0 10:55 pts/1 00:00:00 sleep 4000
trainee 3906 3788 0 10:55 pts/0 00:00:00 grep --color=auto sleep


Is there a way to change the environment variables of another process in Unix?

On Unix, is there any way that one process can change another's environment variables (assuming they're all being run by the same user)? A general solution would be best, but if not, what about the specific case where one is a child of the other?

Edit: How about via gdb?


Using all cores when tarring and splitting

I want my tar command to use all cores (8). When I pack into a single archive like this, I got it working: tar -I pigz -cf packed.tar.gz folder/. It works, and it uses all cores.

However, when I need to pack into multiple files, I can't get it to use all cores. This is my command: tar cvzf - folder/ | split --bytes=4GB - packed.tar.gz. How can I make this command use all cores instead of just one?

Thanks for all your input.


For the multithreaded compression tool pigz:

tar -I pigz -cvf - folder/ | split --bytes=4GB - packed.tar.gz 

Different commands across Linux distributions

Hi, I'm a Linux beginner. While reading about Linux, I found three kinds of commands that do the same job: service, chkconfig, systemctl. I know these differ from distribution to distribution. Can anyone suggest how to learn which of these commands the different distributions use?

Especially, how to remember them :)

Thanks

Welcome to [so], please post this question on https://superuser.com/ ... Please read [mcve] to figure out which types of questions belong where. That way you will get good answers. –


How do I restart a NodeJS API service if it fails?

I have NodeJS code similar to this:

cluster.js

'use strict'; 

const cluster = require('cluster'); 
var express = require('express'); 
const metricsServer = express(); 
const AggregatorRegistry = require('prom-client').AggregatorRegistry; 
const aggregatorRegistry = new AggregatorRegistry(); 
var os = require('os'); 

if (cluster.isMaster) { 
    for (let i = 0; i < os.cpus().length; i++) { 
     cluster.fork(); 
    } 

    metricsServer.get('/metrics', (req, res) => { 
     aggregatorRegistry.clusterMetrics((err, metrics) => { 
      if (err) console.log(err); 
      res.set('Content-Type', aggregatorRegistry.contentType); 
      res.send(metrics); 
     }); 
    }); 

    metricsServer.listen(3013); 
    console.log(
     'Cluster metrics server listening to 3013, metrics exposed on /metrics' 
    ); 
} else { 
    require('./app.js'); // Here it'll handle all of our API service and it'll run under port 3000 
} 

As you can see in the above code, I'm using the manual NodeJS clustering method rather than the PM2 cluster, because I need to monitor my API through Prometheus. I usually start cluster.js via pm2 start cluster.js, but due to some database connection issue our app.js service failed while cluster.js didn't. It obviously looks like I didn't handle the database connection error, and indeed I hadn't. I would like to know:

  • How can I make sure my app.js and cluster.js always restart if they crash?

  • Is there a Linux crontab that could be set up to check that certain ports are always up (i.e. 3000 and 3013)? (If this is a good idea, I'd appreciate it if you could provide the code; I'm not very familiar with Linux.)

  • Or, I could deploy another NodeJS API to check that certain services are running, but since my API is real-time and carries a certain amount of load, I'm not happy about doing that.

Any help would be appreciated. Thanks in advance.


I recently found that we can listen for worker events and, if a worker dies or is killed, restart it accordingly.

Here is the code:

'use strict'; 

const cluster = require('cluster'); 
var express = require('express'); 
const metricsServer = express(); 
var os = require('os'); 

if (cluster.isMaster) { 
for (let i = 0; i < os.cpus().length; i++) { 
    cluster.fork(); 
} 

cluster.on(
     "exit", 
     function handleExit(worker, code, signal) { 

      console.log( "Worker has died.", worker.process.pid); 
      console.log( "Death was suicide:", worker.exitedAfterDisconnect); 

      // If a Worker was terminated accidentally (such as by an uncaught 
      // exception), then we can try to restart it. 
      if (! worker.exitedAfterDisconnect) { 

       var worker = cluster.fork(); 
       // CAUTION: If the Worker dies immediately, perhaps due to a bug in the 
       // code, you can run [from what I have READ] into rapid CPU consumption 
       // as Master continually tries to create new Workers. 

      } 

     } 
    ); 

} else { 
require('./app.js'); 
}