Thread starter: lyxmoo

A Huang Jianxiang-style critique of Solaris: don't treat it as gospel, SunOS has plenty of bugs too.

#11 | Posted 2007-02-27 10:23
Open source really is bad for security, in my opinion.

#12 | Posted 2007-02-28 09:35
You're all experts here; I still have a very long way to go...

#13 | Posted 2007-03-01 19:15
Originally posted by briangao on 2007-2-26 22:52:

Well, Solaris is open source now. If you don't like the way it is, go ahead and contribute. Posting your frustration in the forum doesn't make Solaris any better or worse.



gdb ../../bin/suckerd /var/core/coresysprb_suckerd-2-8796
GNU gdb 6.0
Copyright 2003 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "sparc-sun-solaris2.8"...

warning: core file may not match specified executable file.
Core was generated by `/opt/bin/sysprb_suckerd-2'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /usr/lib/libnsl.so.1...done.
Loaded symbols for /usr/lib/libnsl.so.1
Reading symbols from /usr/lib/libkstat.so.1...done.
Loaded symbols for /usr/lib/libkstat.so.1
Reading symbols from /usr/lib/libsocket.so.1...done.
Loaded symbols for /usr/lib/libsocket.so.1
Reading symbols from /usr/lib/libpthread.so.1...done.
Loaded symbols for /usr/lib/libpthread.so.1
Reading symbols from /usr/lib/librt.so.1...done.
Loaded symbols for /usr/lib/librt.so.1
Reading symbols from /opt/lib/mysql/libmysqlclient.so.14...done.
Loaded symbols for /opt/lib/mysql/libmysqlclient.so.14
Reading symbols from /usr/lib/libm.so.1...done.
Loaded symbols for /usr/lib/libm.so.1
Reading symbols from /usr/lib/libz.so...done.
Loaded symbols for /usr/lib/libz.so
Reading symbols from /opt/lib/libnetsnmp.so.15...done.
Loaded symbols for /opt/lib/libnetsnmp.so.15
Reading symbols from /opt/lib/librrd.so.2...done.
Loaded symbols for /opt/lib/librrd.so.2
Reading symbols from /usr/lib/libxml2.so.2...done.
Loaded symbols for /usr/lib/libxml2.so.2
Reading symbols from /opt/oracle/product/9.2.0.1/lib32/libclntsh.so.9.0...done.
Loaded symbols for /opt/oracle/product/9.2.0.1/lib32/libclntsh.so.9.0
Reading symbols from /usr/lib/libc.so.1...done.
Loaded symbols for /usr/lib/libc.so.1
Reading symbols from /usr/lib/libdl.so.1...done.
Loaded symbols for /usr/lib/libdl.so.1
Reading symbols from /usr/lib/libmp.so.2...done.
Loaded symbols for /usr/lib/libmp.so.2
Reading symbols from /usr/lib/libaio.so.1...done.
Loaded symbols for /usr/lib/libaio.so.1
Reading symbols from /usr/lib/libcrypt_i.so.1...done.
Loaded symbols for /usr/lib/libcrypt_i.so.1
Reading symbols from /usr/lib/libgen.so.1...done.
Loaded symbols for /usr/lib/libgen.so.1
Reading symbols from /usr/local/ssl/lib/libssl.so.0.9.7...done.
Loaded symbols for /usr/local/ssl/lib/libssl.so.0.9.7
Reading symbols from /usr/local/ssl/lib/libcrypto.so.0.9.7...done.
Loaded symbols for /usr/local/ssl/lib/libcrypto.so.0.9.7
Reading symbols from /usr/local/lib/libgcc_s.so.1...done.
Loaded symbols for /usr/local/lib/libgcc_s.so.1
Reading symbols from /opt/lib/libfreetype.so.6...done.
Loaded symbols for /opt/lib/libfreetype.so.6
Reading symbols from /opt/lib/libpng.so.3...done.
Loaded symbols for /opt/lib/libpng.so.3
Reading symbols from /opt/lib/libart_lgpl_2.so.2...done.
Loaded symbols for /opt/lib/libart_lgpl_2.so.2
Reading symbols from /usr/local/lib/libiconv.so.2...done.
Loaded symbols for /usr/local/lib/libiconv.so.2
Reading symbols from /opt/oracle/product/9.2.0.1/lib32/libwtc9.so...done.
Loaded symbols for /opt/oracle/product/9.2.0.1/lib32/libwtc9.so
Reading symbols from /usr/lib/libsched.so.1...done.
Loaded symbols for /usr/lib/libsched.so.1
Reading symbols from /usr/platform/SUNW,Sun-Fire-V240/lib/libc_psr.so.1...done.
Loaded symbols for /usr/platform/SUNW,Sun-Fire-V240/lib/libc_psr.so.1
Reading symbols from /usr/lib/libthread.so.1...done.
Loaded symbols for /usr/lib/libthread.so.1
(gdb) w
#0  0xfe39437c in _insert_nolock () from /usr/lib/libc.so.1
(gdb) where
#0  0xfe39437c in _insert_nolock () from /usr/lib/libc.so.1
#1  0xfe39423c in popen () from /usr/lib/libc.so.1
#2  0x00044ae0 in get_value_snmp ()
(gdb) quit


OK, so it crashed inside popen.

Look at the popen source. That _insert_nolock is probably where the node access goes out of bounds.

FILE *
popen(const char *cmd, const char *mode)
{
	int	p[2];
	pid_t	pid;
	int	myside;
	int	yourside;
	int	fd;
	const char *shpath;
	FILE	*iop;
	int	stdio;
	node_t	*curr;
	char	*argvec[4];
	node_t	*node;
	posix_spawnattr_t attr;
	posix_spawn_file_actions_t fact;
	int	error;
	static const char *sun_path = "/bin/sh";
	static const char *xpg4_path = "/usr/xpg4/bin/sh";
	static const char *shell = "sh";
	static const char *sh_flg = "-c";

	if ((node = lmalloc(sizeof (node_t))) == NULL)
		return (NULL);
	if ((error = posix_spawnattr_init(&attr)) != 0) {
		lfree(node, sizeof (node_t));
		errno = error;
		return (NULL);
	}
	if ((error = posix_spawn_file_actions_init(&fact)) != 0) {
		lfree(node, sizeof (node_t));
		(void) posix_spawnattr_destroy(&attr);
		errno = error;
		return (NULL);
	}
	if (pipe(p) < 0) {
		error = errno;
		lfree(node, sizeof (node_t));
		(void) posix_spawnattr_destroy(&attr);
		(void) posix_spawn_file_actions_destroy(&fact);
		errno = error;
		return (NULL);
	}

	shpath = __xpg4 ? xpg4_path : sun_path;
	if (access(shpath, X_OK))	/* XPG4 Requirement: */
		shpath = "";		/* force child to fail immediately */

	myside = tst(p[WTR], p[RDR]);
	yourside = tst(p[RDR], p[WTR]);
	/* myside and yourside reverse roles in child */
	stdio = tst(0, 1);

	/* This will fail more quickly if we run out of fds */
	if ((iop = fdopen(myside, mode)) == NULL) {
		error = errno;
		lfree(node, sizeof (node_t));
		(void) posix_spawnattr_destroy(&attr);
		(void) posix_spawn_file_actions_destroy(&fact);
		(void) close(yourside);
		(void) close(myside);
		errno = error;
		return (NULL);
	}

	lmutex_lock(&popen_lock);

	/* in the child, close all pipes from other popen's */
	for (curr = head; curr != NULL && error == 0; curr = curr->next) {
		/*
		 * These conditions may apply if a previous iob returned
		 * by popen() was closed with fclose() rather than pclose(),
		 * or if close(fileno(iob)) was called.
		 * Accommodate these programming errors.
		 */
		if ((fd = curr->fd) != myside && fd != yourside &&
		    fcntl(fd, F_GETFD) >= 0)
			error = posix_spawn_file_actions_addclose(&fact, fd);
	}
	if (error == 0)
		error = posix_spawn_file_actions_addclose(&fact, myside);
	if (yourside != stdio) {
		if (error == 0)
			error = posix_spawn_file_actions_adddup2(&fact,
			    yourside, stdio);
		if (error == 0)
			error = posix_spawn_file_actions_addclose(&fact,
			    yourside);
	}
	if (error == 0)
		error = posix_spawnattr_setflags(&attr,
		    POSIX_SPAWN_NOSIGCHLD_NP | POSIX_SPAWN_WAITPID_NP);
	if (error) {
		lmutex_unlock(&popen_lock);
		lfree(node, sizeof (node_t));
		(void) posix_spawnattr_destroy(&attr);
		(void) posix_spawn_file_actions_destroy(&fact);
		(void) fclose(iop);
		(void) close(yourside);
		errno = error;
		return (NULL);
	}
	argvec[0] = (char *)shell;
	argvec[1] = (char *)sh_flg;
	argvec[2] = (char *)cmd;
	argvec[3] = NULL;
	error = posix_spawn(&pid, shpath, &fact, &attr,
	    (char *const *)argvec, (char *const *)environ);
	(void) posix_spawnattr_destroy(&attr);
	(void) posix_spawn_file_actions_destroy(&fact);
	(void) close(yourside);
	if (error) {
		lmutex_unlock(&popen_lock);
		lfree(node, sizeof (node_t));
		(void) fclose(iop);
		errno = error;
		return (NULL);
	}
	_insert_nolock(pid, myside, node);

	lmutex_unlock(&popen_lock);

	_SET_ORIENTATION_BYTE(iop);

	return (iop);
}

#14 | Posted 2007-03-01 20:21
Originally posted by lyxmoo on 2007-3-1 19:15:


Loaded symbols for /usr/lib/libthread.so.1
w#0  0xfe39437c in _insert_nolock () from /usr/lib/libc.so.1
(gdb) where
#0  0xfe39437c in _insert_nolock () from /usr/lib/libc.so.1
#1  0xfe39423c in popen () from /usr/lib/libc.so.1
#2  0x00044ae0 in get_value_snmp ()
(gdb) quit


OK, so it crashed inside popen.

Look at the popen source. That _insert_nolock is probably where the node access goes out of bounds.

FILE *
popen(const char *cmd, const char *mode)
{
...


The call stack alone isn't enough to say whose bug this is. A core dump is not the same thing as a kernel panic. What if the arguments you passed to popen were bad? Reading the argument values out of the core dump isn't hard, even if the source wasn't compiled with gcc -g.

The best approach is to use gdb to read out the values of popen's two arguments. Given those input values, it should be easy to reproduce the bug; then we would know not only why it dumped core, but also whether the problem lies in the application or in the function itself.
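For reference, on 32-bit SPARC something like the following should pull those two arguments out of the core file. This is only a sketch: at popen's entry the arguments arrive in the usual SPARC input registers, but their values may already have been reused by the time of the crash.

```
(gdb) frame 1                 # select the popen() frame
(gdb) info registers i0 i1    # SPARC incoming arguments, if not clobbered
(gdb) x/s $i0                 # the cmd string
(gdb) x/s $i1                 # the mode string, "r" or "w"
```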

[ Last edited by Solaris12 on 2007-3-1 20:22 ]

#15 | Posted 2007-03-09 12:07

Dug up an old article; posting it here for the record.

Original article: http://www.itworld.com/Comp/2375/swol-0901-insidesolaris/


The kernel dispatcher and associated subsystems provide for the prioritization and scheduling of kernel threads in one of several bundled scheduling classes. The details of the implementation are covered in a series of past Inside Solaris columns which began in October 1998.  

Solaris currently ships with two threads libraries: libthread.so, for support of the Solaris threads interfaces, and libpthread.so, the POSIX (Portable Operating System Interface for Unix) threads APIs. User threads are created by a call to either thr_create(3THR) (Solaris threads) or pthread_create(3THR) (POSIX threads). The Solaris threads library was originally introduced in Solaris 2.2. At the time, the POSIX threads specification had not been completed. When the POSIX draft was ready, an implementation of the POSIX threads library was developed and began shipping in Solaris 2.6. Both libraries continue to be bundled with Solaris, but we recommend that any new development use the POSIX interfaces, as new features and functionality are being integrated into the POSIX code but not necessarily into the Solaris threads library.

User threads have no notion of scheduling classes, such as the timeshare and realtime classes implemented in the kernel. POSIX threads do provide several scheduling policies, which the programmer can specify as part of a thread's attributes. Attributes, introduced by POSIX, let the programmer alter the behavior of a user thread or synchronization object; both threads and synchronization objects have them. When creating a POSIX thread or synchronization object, an attribute structure can be passed to the create call; it must be initialized, and any specific attributes changed, before that call is made.

As of Solaris 8, the supported attributes for a thread are:

contentionscope: PTHREAD_SCOPE_PROCESS or PTHREAD_SCOPE_SYSTEM. Determines if the thread is bound or unbound (more on this below).

detachstate: Determines whether or not to save the thread's state when it terminates, so that it is joinable. That is, another thread in the same process can issue a pthread_join() on the thread ID and collect the thread's exit status.

stackaddr: User-specified thread stack address. By default, the system will determine the stack address based on existing address-space mappings.

stacksize: User-specified stack size. Default is 1 MB for a 32-bit process, 2 MB for a 64-bit process.

priority: A user-specified priority. Default is zero.

policy: The scheduling policy. Default is SCHED_OTHER, meaning that Solaris will provide fixed priority behavior.

guardsize: Specifies protection against stack overflow by placing a guard page (red zone) around the mapped stack pages.

inheritsched: Default value of PTHREAD_INHERIT_SCHED allows new threads to inherit the scheduling policy of the calling thread.

Each of these attributes has a corresponding pair of POSIX APIs for reading (get) and altering (set) a specific attribute. For example, determining or changing the stack size attribute is accomplished using pthread_attr_setstacksize (3THR) and pthread_attr_getstacksize (3THR). The programmer cannot alter or read an attribute through simple structure assignments in code; the appropriate attribute API must be used. As we move through the discussion, we'll talk more about the attributes (priority and policy) directly related to the subject at hand.

The user thread is abstracted as a data structure in the address space of the process that issued the thread create call. A data structure is allocated and initialized for each user thread. The programmer can specify certain thread attributes, such as the thread stack size, stack address, and priority, when the thread is created. The library code will perform validity checks on the passed arguments before allowing the thread create to complete. Note that specifying a stack address, stack size, and priority is optional, and with null values the system will provide defaults. The default thread priority is zero, and the default stack size is 1 MB for a 32-bit process and 2 MB for a 64-bit process. Using Solaris threads, the stack address and stack size are arguments in the thr_create (3THR) call.

For POSIX threads, an attribute structure can be initialized with nondefault values for the stack address, stack size, scheduling policy, and priority. A pointer to the attribute structure is then passed as an argument to the pthread_create (3THR) call.

The fields in the thread structure get populated during the thread create, with the stack pointer and size, thread priority, scheduling policy, and various other fields set prior to the thread executing for the first time. We'll go through the relevant fields as we talk about thread scheduling, priorities, and state changes.

Three factors affect the scheduling of user threads:

The thread's contention scope
The thread's priority
The scheduling policy attribute

The contentionscope attribute can be either process (intraprocess) or system (interprocess). System contention scope describes a thread bound to an underlying LWP (lightweight process). Bound threads are created by setting the contentionscope attribute to PTHREAD_SCOPE_SYSTEM for POSIX threads, or by passing the THR_BOUND flag for Solaris threads. The default for both is an unbound thread (the PTHREAD_SCOPE_PROCESS attribute for POSIX). A bound thread has an LWP created during thread creation, and the user thread is bound (linked) to that LWP for the thread's lifetime. For bound threads, the library-level priorities and scheduling policies are immaterial: a bound thread always has the execution resource it needs (an LWP) to be scheduled by the kernel. Altering the priority of a bound thread is done with the priocntl(1) command (or the corresponding system call, programmatically) and affects the thread's priority as seen by the kernel dispatcher.

The user thread's priority and scheduling policy factor into the scheduling of threads within a process (intraprocess) contentionscope (the default). We can view the scheduling of these unbound threads in two phases. First, they must be scheduled from within the library. A thread is scheduled when it's linked to an available LWP; this is the first phase. The second phase involves the kernel dispatcher scheduling the LWP and its associated kernel thread (to which the user thread has been linked by the threads library) onto an available processor.

A dispatch queue of all runnable user threads is maintained at the library level. The dispatch queue in releases up to and including Solaris 8 is an array of dispq structures, with each structure member containing a pointer to the first and last threads on the list. Each array element corresponds to a user thread priority, and threads at the same priority are maintained on a linked list and rooted in the array element that corresponds to the priority. This is shown in the figure below.

Fig 1. Threads library dispatcher queue


There are 128 user thread priorities (0 through 127). As we mentioned, the default user thread priority is 0. Higher priorities are better priorities (as is the case with kernel global priorities), and threads with higher priorities will be scheduled before those with lower priorities. The scheduling of user threads by the library routines involves finding the highest-priority runnable thread and linking it to an available LWP from the pool.

The programmer can provide hints to the library as to the level of concurrency desired, using either thr_setconcurrency (3THR) (Solaris threads) or pthread_setconcurrency (3THR) (POSIX threads). Both calls resolve to the same internal library _thr_setconcurrency() function. Both APIs take an integer value as an argument, which translates internally to the number of LWPs desired by the process for user thread execution. The _thr_setconcurrency() code does validity tests on the passed concurrency value and computes the difference between the desired concurrency and the current number of LWPs in the pool. An internal library variable, _nlwps, maintains a count of LWPs during the execution lifetime of the process. _thr_setconcurrency() then creates additional LWPs based on the computed difference. For instance, if there are three LWPs in the pool and a pthread_setconcurrency(5) call is made (desired concurrency level is five), two additional LWPs will be created.

User-thread scheduling is done in the library through internal library interfaces called at various points in time during the execution of the process -- or, more precisely, by user threads executing within the process. Specifically, the user threads scheduler will be entered when:

A thread blocks in a system call or a library call for a synchronization object (e.g., a mutex lock)
A thread terminates
A thread explicitly yields a CPU (thr_yield(3THR))
A thread is preempted by a higher (better) priority thread becoming runnable

A compute-bound thread that does not enter the kernel via a system call or yield the processor will execute until it completes, never surrendering the LWP it's been linked to when first scheduled by the library. This is an important consideration when developing multithreaded applications and understanding the level of concurrency and execution time for user threads.

Yielding the processor is a voluntary action using the thr_yield(3THR) interface. The internal library code will simply yield the processor if there are no runnable threads on the dispatch queue. Otherwise, several library internal functions are called to remove the thread that issued the yield call from the linked list of ONPROC threads. ONPROC is the thread's state when it's on a processor, and all ONPROC threads are maintained on a linked list in the library. The yielding thread is then placed on the internal dispatch (run) queue, and the library swtch() code is called to find the runnable thread with the highest priority and schedule it.

When a user thread issues a system call or calls one of the library interfaces to acquire a synchronization object, the thread may need to block if the desired synchronization object is not available. Remember, it's being held by another thread. In either case, the thread is temporarily bound to the LWP while blocked. For a system call, the thread enters the kernel and the kernel will put the LWP on a sleep queue while it is blocked, waiting for the system call to complete. The kernel handles the wakeup mechanism when the system call is completed so that the LWP can resume execution.

For a library-level blocking on a synchronization object, the thread is placed on a library-level sleep queue, and its state is changed from ONPROC to SLEEP. During the thread swtch code, the highest-priority runnable thread is located on the internal dispatch queue, and the LWP is passed to the newly-selected thread for execution. Since we're blocking on a synchronization primitive in user land, with no visibility in the kernel, there's no reason to have the LWP block in the kernel. The LWP can be, and is, made available to execute another user thread.

Finally, the threads library supports thread preemption. When a thread becomes runnable with a higher priority than the threads currently in the ONPROC state (on-a-processor, as seen from the threads library's perspective), a preemption forces a lower-priority thread off its LWP so the higher-priority thread can get the execution resource it needs.

That's a wrap. Next month, we'll continue our discussion of the threads library with a closer look at the internal functions and algorithms for user thread scheduling.
Jim Mauro is an area technology manager for Sun Microsystems in the Northeast, focusing on server systems, clusters, and high availability. He has 18 years of industry experience, working in educational services (he developed and delivered courses on Unix internals and administration) and software consulting.

#16 | Posted 2007-03-09 20:41
Originally posted by lyxmoo on 2007-3-9 12:07:
原文:http://www.itworld.com/Comp/2375/swol-0901-insidesolaris/


The kernel dispatcher and associated subsystems provide for the prioritization and scheduling of kernel threads in on ...



I haven't read it closely; is this related to the earlier core dump discussion?

Also, the threading implementation in Solaris 8 differs enormously from Solaris 10, so parts of this article no longer apply to Solaris 10.

Was your core dump produced on Solaris 8?

#17 | Posted 2007-03-09 21:20
Heh, you're all heavyweights here; I can't get a word in, so I'll just listen and watch.