Conga User Manual

Posted 2007-09-25 20:34
All you need to know to get Conga up and running
Introduction
Conga Architecture
Conga is an agent/server architecture for remote administration of systems. The agent component is called "ricci", and the server is called "luci". One luci server can communicate with many ricci agents installed on systems. When a system is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA; in fact, CA integration is not complete in version #1 of Conga). Through the UI provided by luci, users can configure and administer storage and cluster behavior on remote systems. Communication between luci and ricci is done via XML.
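The XML transport can be pictured with a short sketch. The message below is purely illustrative: the actual ricci wire format is not documented in this manual, so the element and attribute names here are invented for the example.

```shell
# Illustrative only: a made-up XML request of the general shape luci
# might send to a ricci agent. The tag and attribute names are
# hypothetical, not the real ricci schema.
request=$(cat <<'EOF'
<?xml version="1.0"?>
<ricci_request function="list_storage" version="1.0">
  <var name="hostname" value="node1.example.com"/>
</ricci_request>
EOF
)
printf '%s\n' "$request"
```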
Luci Description
As stated above, systems to be administered are "added" to a luci server (in the documentation that follows, the term "registered" is also used to mean that a system has been added to a luci server to be administered remotely). This is done by storing the hostname (FQDN) or IP address of the system in the luci database. When a luci server is first installed, the database is empty. It is possible, however, to import part or all of the systems database from an existing luci server when deploying a new luci server. This capability provides a means of replicating a luci server instance, as well as an easier upgrade and testing path.
Every luci server instance has one user at initial installation time. This user is called "admin". Only the admin user may add systems to a luci server. The admin user can also create additional user accounts and determine which users are allowed to access which systems in the luci server database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import systems.
Installation of Luci
After the luci RPMs are installed, the server can be started with the command "service luci start". The first time the server is started, it initializes itself by generating https SSL certificates, and an initial password for the admin user is generated as a random value. The admin password can be set at any time by running the /usr/sbin/luci_admin application and specifying "password" on the command line. luci_admin can be run before luci is started for the first time to set up an initial password for the admin account. Other utilities available from luci_admin are:

  • backup: This option backs the luci server up to a file.
  • restore: This restores a luci site from a backup file.
  • init: This option regenerates SSL certs.
  • help: Shows usage
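A minimal first-boot sequence, assuming the luci RPMs are installed. The commands are guarded so the sketch is a harmless no-op on hosts without luci:

```shell
# Sketch of first-boot administration with luci_admin. The subcommands
# are those listed above; everything is guarded so the sketch does
# nothing on hosts where luci is not installed.
if command -v luci_admin >/dev/null 2>&1; then
    /usr/sbin/luci_admin password   # prompts for a new admin password
    /usr/sbin/luci_admin backup     # back the luci server up to a file
    service luci start
    status="luci configured and started"
else
    status="luci_admin not installed on this host; nothing to do"
fi
echo "$status"
```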

Logging In
With the luci service running and an admin password set up, the next step is to log in to the server. Remember to specify https in the browser. Port 8084 is the default port for luci, but this value can easily be changed in /etc/sysconfig/luci.
Typical URL: https://hostname.org:8084/luci
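Changing the port is a one-line edit of /etc/sysconfig/luci. A runnable sketch, using a temporary copy of the file so the example works anywhere, and assuming the port variable is named LUCI_HTTPS_PORT (the name may differ by version):

```shell
# Sketch: switch luci from the default port 8084 to 8443.
# A temp file stands in for /etc/sysconfig/luci; the variable
# name LUCI_HTTPS_PORT is an assumption.
cfg=$(mktemp)
echo 'LUCI_HTTPS_PORT=8084' > "$cfg"
sed -i 's/^LUCI_HTTPS_PORT=.*/LUCI_HTTPS_PORT=8443/' "$cfg"
new_port=$(cat "$cfg")
echo "$new_port"
rm -f "$cfg"
```

After editing the real file, restart the service ("service luci restart") for the new port to take effect.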
Here is a screen shot of the luci login page.


Figure #1: Login Page
Enter admin as the user name, enter the admin password that has been set up in the appropriate field, and then click "log in".
UI Organization
luci is organized into three tabs:

  • Homebase: This is where admin tools for adding and deleting systems or users are located. Only admin is allowed access to this tab.
  • Cluster: If any clusters are set up with the luci server, they will show up in a list in this tab. If a user other than admin navigates to the cluster tab, only those clusters that the user has permission to manage show up in the cluster list. The cluster tab provides a means for creating and configuring clusters.
  • Storage: Remote administration of storage is available through this page in the luci site.

Homebase Tab
The following figure shows the initial view of the Homebase tab.


Figure #2: Homebase Tab
With no systems registered with a luci server, the Homebase page provides three initial utilities to the admin:

  • Add a System
  • Add an Existing Cluster
  • Add a User
After systems have been added to a luci server, the Manage Systems link becomes available in the navigation table.
After users have been added to a luci server, the following links become available in the navigation table:

  • User Permissions
  • Delete User

Add a System:
Adding a single system to luci makes the system available for remote storage administration. In addition to storage administration, Conga provides remote package retrieval and installation, the chkconfig function, full remote cluster administration, and module support to filter and retrieve log entries.
To add a system, click on the Add a System link in the left hand navigation table. This will load the following page:

Figure #3: Add a System
The fully qualified domain name OR IP address of the system is entered in the System Hostname field. The root password for the system is entered in the adjacent field. As a convenience for adding multiple systems at once, an Add Another Entry button is provided. When this button is clicked and at least one additional entry row is present, a checkbox also becomes available that can be selected if all systems specified for addition to the luci server share the same password.

Figure #4: Multiple System Entries
If the System Hostname is left blank for any row, it is disregarded when the list of systems is submitted for addition. If the user wishes to delete a row for any reason, the icon at the far right of the row (that resembles rows in a table with an 'x') can be clicked. If systems in the list of rows do NOT share the same password (and the checkbox is, of course, left unchecked) and one or more passwords are incorrect, an error message is generated for each system that has an incorrect password. The systems listed with correct passwords are added to the luci server. In addition to incorrect password problems, an error message is also displayed if luci is unable to connect to the ricci agent on a system.
For most typical datacenter deployments of Conga, the luci server will reside on a system within the confines of the datacenter network, and the datacenter systems can reasonably be assumed to be trustworthy. If a luci server is used to connect to systems across the open Internet, the user could be vulnerable to a form of security attack known as a 'Man in the Middle' attack, wherein a hostile party that sits between the client and server intercepts the data exchanged, masquerades to each peer as the legitimate other party, and issues potentially malicious commands.
If the user would like to verify the certificate of a ricci agent before authenticating to it (avoiding a 'Man in the Middle' attack), the checkbox marked Verify system certificates before sending any passwords should be checked. With this box checked, clicking submit retrieves the certificate information for all systems listed, and provides a 'Trust' checkbox for each system. The password for a system will not be sent without the trust box checked. To add the system or systems, click the 'Trust' checkboxes for each row desired and click submit again. Mousing over the lock icon for a row entry will display the certificate information for just that system. It is important to note that in order to defend against this type of attack, the user must know the certificate fingerprints of client systems prior to the initial key exchange. When the client systems are added, the user can then compare the known certificate fingerprints with the fingerprints displayed by the luci server to verify they match. A mismatch indicates the possibility of an attack.

Figure #5: Certificate Verification Page
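To record a fingerprint ahead of time, an administrator can run openssl against the agent's certificate on each node. A self-contained sketch follows; a throwaway self-signed certificate stands in for a real ricci agent certificate, whose path on disk will vary:

```shell
# Sketch: compute a certificate's SHA1 fingerprint for later comparison
# against what luci displays. The certificate generated here is a
# throwaway; on a real node, point the x509 step at the ricci agent's
# installed certificate instead.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=node1.example.com" \
    -keyout "$dir/key.pem" -out "$dir/cert.pem" -days 1 2>/dev/null
fp=$(openssl x509 -in "$dir/cert.pem" -noout -fingerprint -sha1)
echo "$fp"
rm -rf "$dir"
```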
Finally, if a system entered on the form for addition is ALREADY being managed by the luci server, the system is not added again (but the administrator is informed via an error message).
Add an Existing Cluster:
This page looks much like the Add a System page, except that only one system may be listed. Any node in the cluster may be used for this entry. Luci will contact the specified system and attempt to authenticate with the password provided. If successful, the complete list of cluster nodes will be returned, and a table will be populated with the node names and an adjacent password field for each node. The initial node that was entered appears in the list with its password field marked as "authenticated". There is a convenience checkbox if all nodes share the same password. NOTE: At this point, no cluster nodes have been added to luci - not even the initial node used to retrieve the cluster node list that successfully authenticated. The cluster and subsequent nodes are only added after the entire list has been submitted with the Submit button, and all nodes authenticate.
If any nodes fail to authenticate, they appear in the list in red font, so that the password can be corrected and the node list submitted again. Luci allows an existing cluster to be added even if one or more nodes are down, unreachable, or otherwise non-operational. If any nodes are not authenticated at the time their cluster is added, they must be authenticated when possible, via the "Reauthenticate to Storage or Cluster Systems" form in the "Manage Systems" section located in the "Homebase" tab.
When a cluster is added to a luci server, all nodes are also added as general systems so that storage may be managed on them. If this is not desired, the individual systems may be removed from luci, while remote cluster management capability is maintained.
NOTE: If an administrator desires to create a new cluster, this capability is available on the Cluster tab. This task link is only for adding and managing clusters that already exist.
Add a User:
Here the admin may add additional user accounts. The user name is entered along with an initial password.

Figure #6: Add a User
As stated above, after systems have been added to a luci server, an additional Manage Systems link appears in the navigation table. The Manage Systems page provides a way to delete systems if desired.
When an administrator adds a new user to a luci server, two additional links appear in the navigation table: a Delete User link and a User Permissions link. The Delete User page is self-explanatory: it lists all users other than admin in a dropdown menu. Selecting a user name and then clicking the Delete This User button removes that user account from the luci server.
The User Permissions page is where an administrator grants privileges to user accounts. A dropdown menu lists all current users, followed by a list of all systems registered with the luci server. Selecting a user from the dropdown sets the context for the page; the admin then checks those systems that the user should be allowed to administer. Finally, the Update Permissions button is clicked to persist the privileges. By default, a user has no permissions upon creation.

Figure #7: User Permissions Page
Cluster Tab
When the cluster tab is selected, luci first checks the identity of the user and compiles a list of clusters that the current user is permitted to administer. If the current user is not permitted to access any of the clusters registered on the luci server, they are informed accordingly. If the current user is admin, all clusters are accessible.
Selecting the Cluster tab causes a page to be displayed that lists all registered clusters on the luci server that are accessible by the current user. Each cluster is identified by name and presents a link to the properties page for that cluster. In addition, the health of the cluster can be quickly assessed - green indicates good health, and red indicates a problem.
The nodes of the cluster are also listed, with health indicated by font color. Green means healthy and part of the cluster; red means not part of the cluster, and gray means that the node is not responding and in an unknown state.
The Cluster List page offers some additional summary information about each cluster. It displays quorum status and the total cluster votes. A dropdown menu allows a cluster to be started, stopped, or restarted. Finally, services for the cluster are listed as links, with their health indicated by font color.
On the left side of every cluster tab page is a navigation table with three links: Cluster List, Create, and Configure. The default page is the Cluster List page. The Create page is for creating a new cluster. Selecting the Configure link displays a short list of clusters in the navigation table. Choosing a cluster name takes the user to the properties page for that cluster (the cluster name link on the Cluster List page performs the same action).

Figure #8: Cluster List Page
You can select a cluster via the main cluster tab navigation table or by clicking the link that is the name of a cluster on the Cluster List page. Selecting a cluster associates that cluster's context with the Cluster tab and causes a cluster-specific navigation table to be displayed below the clusters table (on the left side of the page). The cluster-specific table identifies the cluster name at the top of the table and presents links to the five configuration categories for clusters.
NOTE: Until a specific cluster is selected, the cluster pages have no specific cluster context associated with them. Once a cluster has been selected, however, the links and options available in the lower cluster navigation table pertain to the selected cluster. As the upper cluster navigation table is always available, the cluster context can be changed at any time by selecting a different cluster from the list available under the cluster Configure options in the main navigation table, or by returning to the top-level Cluster List page and selecting the link that is the name of the desired cluster. (You can easily return to the Cluster List page in one of three ways: by clicking on the Cluster tab, selecting the Cluster List link in the main navigation table, or selecting the Configure link from the main navigation table.) The configuration categories available in the lower cluster-specific navigation table are as follows:

  • Nodes
  • Services
  • Resources
  • Failover Domains
  • Shared Fence Devices
Selecting any of these primary configuration links offers a similar set of options for each configuration category:

  • A list is presented of the corresponding configurable cluster elements. For example, if Nodes is selected, a list of all nodes in the cluster is displayed with general node tasks and quick links to node-related configuration pages. The following figure shows a typical node list. Note that this is a high-level view of each node, and is useful for quickly assessing the health of the node and checking which cluster services are currently deployed on a node.
  • A sub-menu is offered for each configuration category. Options in this submenu are:

    • Create or Add
    • Configure, which also displays a list of the individual configuration elements; each entry is a direct link to that element's detailed configuration page

    In summary, after a cluster has been selected, the general cluster properties page is displayed, and a new navigation table is rendered with links for each of the five cluster configuration categories. Selecting a category link displays a list of those elements with a high-level diagnostic view and links to more detailed aspects of the elements, a link to create a new element, and a sub-menu list of direct links to the detailed configuration properties page for each element currently configured. This "drill-down" pattern, wherein a top-level list of elements is displayed with links to properties pages for each element, paired with a way to create a new element, is repeated throughout the luci Cluster UI.

    Figure #9: Cluster Properties Page - Note the name of the cluster at the top of the page, and in the Title section of the lower navigation table
    Nodes
    Selecting Nodes from the lower navigation table displays a list of nodes in the current cluster, along with some helpful links to services running on each node, fencing for the node, and even a link that displays recent log activity for the node in a new browser window. A dropdown menu gives administrators of the cluster a way to have a node join or leave the cluster. The node can also be fenced, rebooted, or deleted through the options in the dropdown menu.

    Figure #10: Node List Page
    The name of the node is a link to the detailed configuration page for that node, and the color of the font (green or red) reflects a coarse status check on the health of the node.
    When the Nodes link is chosen in the lower navigation table, the Add a Node and Configure options become visible. The Configure option link has a list of the nodes beneath it, and selecting one of these links is a direct path to the detailed properties page for the node, in the same way that the node name link is on the node list page.
    Add a Node
    Below is a screenshot of the Add a Node page:

    Figure #11: Add a Node Page
    The Add a Node page is similar in look and function to the Add a System page available in the Homebase tab. The system hostname or IP address is entered in the appropriate field along with the password for the system. Multiple nodes may be added at once. The user is offered the chance to verify the certificate of the new node to be added, just as when adding a system on the Homebase tab.
    Two other options are available when adding a node. The first is a pair of radio buttons offering a choice between pulling the necessary packages from the system's configured Yum repository (the very latest packages are always selected with this option) or using packages already installed on the system. If any required packages are missing, an error message is returned and the node is not added.
    The other option is a checkbox for Shared Storage support. Checking this box installs and initially configures the CLVM (Clustered Logical Volume Management) packages and the GFS clustered file system packages. In a cluster environment, this box will almost always need to be checked. When the submit button is clicked, the following takes place:

    • Contact is made with each future node's ricci agent. If this contact fails for any listed hostname, the operation is suspended and the user is offered the chance to re-enter the password.
    • After authentication is made on all listed nodes, the proper cluster suite RPMs for that node's architecture are pulled down and installed.
    • After installation, an initial cluster.conf file is propagated to each node.
    • Finally, each future node is rebooted. When the node comes back up, it should join the cluster without error.
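The propagated configuration is an XML file, cluster.conf. A minimal sketch of the general shape of an initial file follows; the cluster name, node names, and version number are invented for the example, and real files generated by luci contain more detail:

```shell
# Illustrative only: the general shape of an initial cluster.conf.
# Names and config_version are made up; luci generates the real file.
conf=$(cat <<'EOF'
<?xml version="1.0"?>
<cluster name="example_cluster" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1"/>
    <clusternode name="node2.example.com" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>
EOF
)
printf '%s\n' "$conf"
```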

    NOTE: Until the node to be added has completed the installation and cluster join operation, any attempt to navigate to the configuration page for that node will result in a "busy signal" graphic that informs the user of what modification is occurring, and to try again later when the operation is complete.
    Node Configuration Page
    Selecting the name link on the node list page, or selecting a node name in the list below the node Configure link in the lower navigation table, takes the user directly to the Node Configuration page. Here is an image of a typical node configuration page:

    Figure #12: Node Configuration Page
    This page is divided into five sections.

    • General Node Tasks - The first section on the node configuration page shows general node health and offers a link to view recent log activity on the node in a pop-up browser window, and also offers a dropdown menu of some common tasks to perform on a node. These tasks are:

      • Have node join/leave cluster - depending on the node status, one of these options is offered.
      • Fence Node - The node is fenced by the configured means.
      • Reboot Node
      • Delete Node - when a node is deleted, it is made to leave the cluster, all cluster services are stopped on the node, its cluster.conf file is deleted, and a new cluster.conf file is propagated to the remaining nodes in the cluster with the deleted node removed from the configuration. Note that deleting a node does not remove the installed cluster packages from the node.

    • The next section of the node configuration page is a table showing the status of cluster daemons. In the screenshot above, four cluster daemons are listed. This is for a RHEL 4 cluster. In the RHEL 5 cluster suite, only two daemons are listed.
      Each daemon can be separately started or stopped, and its chkconfig status amended to allow the daemon to be enabled at system startup or not.
    • All services running on the node are listed along with their status in the "Services on this Node" section. Links are offered to each service's configuration page.
    • The next section of the node configuration page is a display of Failover Domain Membership. Links are offered to the configuration page for each failover domain that the node has membership in.
    • Finally, the last section of the node configuration page is for fence configuration. Two levels of fencing may be configured: a Main fencing method and a Backup method. The cluster suite attempts to fence the node, if necessary, with the main fencing method first. If this fails, the backup method is employed.
      Each of the two fence levels or methods may employ multiple fence types within them; for example, when power switch fencing is used to fence a node with dual redundant power supplies.
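The daemon start/stop and chkconfig controls described above map onto standard init-script operations. A guarded sketch follows; the daemon names are the RHEL 4 set, and on hosts without them the loop simply reports their absence:

```shell
# Sketch: per-daemon status checks, as the node configuration page
# performs them. ccsd/cman/fenced/rgmanager are the RHEL 4 cluster
# daemons; "chkconfig --list <daemon>" would additionally show
# boot-time enablement. Hosts without the daemons report "not installed".
report=""
for d in ccsd cman fenced rgmanager; do
    if [ -x "/etc/init.d/$d" ]; then
        line=$(service "$d" status 2>&1)
    else
        line="$d: not installed"
    fi
    report="$report$line
"
done
printf '%s' "$report"
```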

    Storage Tab
    This tab allows the user to monitor and configure storage on remote systems. It provides a means for configuring disk partitions, logical volumes (clustered as well as single-system use), and file system parameters and mount points. The storage tab is useful for setting up shared storage for clusters and offers GFS and GFS2 (depending on OS version) as a file system option.
    When a user selects the storage tab, the main storage page shows a list of systems available to the logged-in user in a navigation table to the left. A small form allows the user to choose a storage unit size that the user would generally prefer to work in. This choice is persisted for the user and can be changed at any time by returning to this page. In addition, the unit type can be changed on specific configuration forms throughout the storage UI. This general choice allows an administrator to avoid unwieldy decimal representations of storage size (for example, if they know that most of their storage is measured in gigabytes, terabytes, or some other familiar unit).
    A dropdown menu also allows the user to choose if they would rather have devices displayed by path or SCSI ID.
    Finally, this main storage page lists systems that the user is authorized to access but currently unable to administer, due to a problem such as the system being unreachable via the network, or the system having been re-imaged, requiring the luci server admin to re-authenticate with the ricci agent on the system. A reason for the trouble is displayed if it can be determined.
    Only those systems that the user is privileged to administer are shown in the tab's main navigation table. If the user has no permissions on any systems, an appropriate message is displayed.
    General System Page
    After a system is selected for administration, a general properties page is displayed for the system. This page view is divided into three sections:

    • Hard Drives
    • Partitions
    • Volume Groups
    Each of these sections is set up as an expandable tree, with direct links provided to property sheets for specific devices, partitions, etc.



This article is from the ChinaUnix blog. Original post: http://blog.chinaunix.net/u1/39231/showart_389860.html