6.1.0.2: Readme for IBM WebSphere Portal Enable for z/OS 6.1 fix pack 2 (6.1.0.2) - cluster

What is new with Fix Pack 6.1.0.2
This fix pack updates the IBM WebSphere Portal for z/OS 6.1 (6.1.0.1) level to the 6.1.0.2 service release level. This fix pack and these instructions can also be used to upgrade the IBM Web Content Manager for z/OS 6.1 (6.1.0.1) level to the 6.1.0.2 service release level. The following items are included in this fix pack:
• Updated underlying supported hardware and software. Refer to the system requirements for details.
• Included APARs from many WebSphere Portal components.

About Fix Pack 6.1.0.2
Installing this fix pack raises the fix level of your product to version 6.1.0.2.
Important: WebSphere Portal Enable for z/OS Version 6.1.0.2 can be used for performing a fresh installation. When installing WebSphere Portal Enable for z/OS Version 6.1.0.2 from scratch, first put the Portal Version 6.1 GA code into SMP/E, then apply the Portal 6.1.0.2 PTFs in SMP/E (UA46968, UA46969, UA46970, UA47000, UA47023, UA47024, UA47025, UA47026, UA47054, UA47077, UA47078, UA47079, UA47080, UA47081, UA47082). You can then run the Portal installation and configuration as described in the WebSphere Portal Information Center.

Space requirements
The WebSphere Portal Enable for z/OS fix pack Version 6.1.0.2 requires 35000 additional tracks (or 2334 additional cylinders) in the SMPPTS dataset and 1350 additional tracks in the Portal installation filesystem.
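As a sanity check on the space figures above, the tracks-to-cylinders relationship can be computed directly. This short Python sketch assumes 3390-style DASD geometry of 15 tracks per cylinder; that geometry is an assumption (it is not stated in this readme), so verify it for your device type.

```python
import math

# Assumption: 3390-style DASD geometry, 15 tracks per cylinder.
TRACKS_PER_CYLINDER = 15

def tracks_to_cylinders(tracks: int) -> int:
    """Convert a track count to whole cylinders, rounding up,
    since space is allocated in whole cylinders."""
    return math.ceil(tracks / TRACKS_PER_CYLINDER)

# SMPPTS requirement from this readme: 35000 tracks.
print(tracks_to_cylinders(35000))  # 2334, matching the readme's figure
```

Rounding up is what turns 35000 tracks (2333.3 cylinders exactly) into the 2334 cylinders quoted above.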
The PTFs are listed below:
UA46968 UA46969 UA46970 UA47000 UA47023 UA47024 UA47025 UA47026 UA47054 UA47077 UA47078 UA47079 UA47080 UA47081 UA47082
Verify that this free space is available before beginning the installation. At least 400 MB of temporary disk space must also be available.

Cluster upgrade planning
There are two options for performing an upgrade in a clustered environment.

One option is to upgrade the cluster while the entire cluster has been taken offline from receiving user traffic. The upgrade is performed on every node in the cluster before the cluster is brought back online to receive user traffic. This is the recommended approach for an environment with multiple Portal clusters, since 24x7 availability can be maintained. It is also the simplest approach in a single-cluster environment if maintenance windows allow the Portal cluster to be taken offline.

For single-cluster environments that cannot tolerate the outage required to take the cluster offline and perform the upgrade, you can use the single-cluster 24x7 availability process. Review the following requirements and limitations for performing product upgrades while maintaining 24x7 availability in a single cluster (NOTE: Ensure that you understand this information before upgrading your cluster.)

Assumptions for maintaining 24x7 operation during the upgrade process:
• If you want to preserve current user sessions during the upgrade process, make sure that WebSphere Application Server distributed session support is enabled so that user session information is recovered when a cluster node is stopped for maintenance. Alternatively, use monitoring to determine when all (or most) user sessions on a cluster node have completed before stopping that node for the upgrade, to minimize the disruption to existing user sessions.
• Load balancing must be enabled in the clustered environment.
• The cluster has at least two horizontal cluster members.
Limitations on 24x7 maintenance:
• If you have implemented only vertical scaling (no horizontal scaling), so that all cluster members reside on the same node, the fix pack installation process will result in a temporary outage for your end users due to a required restart. In this case, you will be unable to upgrade while maintaining 24x7 availability.
• If you have a single local Web server in your environment, maintaining 24x7 availability during the cluster upgrade may not be possible, since you might be required to stop the Web server while applying corrective service to the local WebSphere Application Server installation.
• When installing the fix pack in a clustered environment, the portlets are only deployed when installing the fix pack on the primary node. The fix pack installation on secondary nodes simply synchronizes the node with the deployment manager to receive the updated portlets. During the portlet deployment on the primary node, the database is updated with the new portlet configuration. Because this database is shared between all nodes, the updated configuration becomes visible to secondary nodes before they receive the updated portlet binary files. It is possible that the new portlet configuration will not be compatible with the previous portlet binary files, and in a 24x7 production environment problems may arise for anyone attempting to use a portlet that is not compatible with the new portlet configuration. Therefore it is recommended that you test your portlets before upgrading the production system in a 24x7 environment, to determine whether any portlets will become temporarily unavailable on secondary nodes between the completion of the fix pack installation on the primary node and its installation on each secondary node.
• To maintain 24x7 operations in a clustered environment, you must stop WebSphere Portal on one node at a time and upgrade it.
It is also required that during the upgrade of the primary node you manually stop the node agents on all other cluster nodes that continue to service user requests. Failure to do so may result in portlets being shown as unavailable on nodes where the node agent is running.
• When uninstalling the fix pack in a clustered environment, the portlets are only redeployed when uninstalling the fix pack on the primary node. The fix pack uninstall on secondary nodes simply synchronizes the node with the deployment manager to receive the updated portlets. During the portlet redeployment on the primary node, the database is updated with the portlet configuration, which becomes available to secondary nodes before they receive the updated binary files, since all nodes share the same database. It is recommended that you test your portlets before uninstalling on the production system in a 24x7 environment, because such an incompatibility might arise if the previous portlet configuration is not compatible with the new portlet binary files.

Steps for installing Fix Pack 6.1.0.2 (single-cluster 24x7 procedure)
Familiarize yourself with the Portal Upgrade Best Practices available from IBM Remote Technical Support for WebSphere Portal.
1. Perform the following steps before upgrading to Version 6.1.0.2:
a. Review the requirements for this cumulative fix. If necessary, upgrade all software before applying this cumulative fix. If updates are required to the WebSphere Application Server level, perform that update first on the Deployment Manager. Instructions are also provided to install WebSphere Application Server updates on each node in the cluster during the time that node is taken offline from receiving user traffic. NOTE: You can download the latest WebSphere Application Server interim fixes from the IBM support site. If the Deployment Manager was upgraded, then:
• Stop the Deployment Manager.
• Create a symlink to the product HFS/zFS:
ln -s /PortalServer/base/wp.base/shared/app/wp.base.jar /plugins/wp.base.jar
for example:
ln -s /usr/lpp/zPortalServer/V6R1/PortalServer/base/wp.base/shared/app/wp.base.jar /WebSphere/V6R1/DeploymentManager/plugins/wp.base.jar
• Start the Deployment Manager.
Verify that the information in the wkplc.properties, wkplc_dbtype.properties, and wkplc_comp.properties files is correct on each node in the cluster:
• Enter a value for the PortalAdminPwd and WasPassword parameters in the wkplc.properties file.
• Ensure that the value of the XmlAccessPort property in wkplc_comp.properties matches the port used for HTTP connections to the WebSphere Portal server. NOTE: If you are using Internet Protocol Version 6 (IPv6) and you have specified the WpsHostName property as an IP address, normalize the address by placing square brackets around it, as follows: WpsHostName=[my.IPV6.IP.address].
• The WebSphere Portal Update Installer removes plain-text passwords from the wkplc*.properties files. To keep these passwords in the properties files, include the following line in the wkplc.properties file: PWordDelete=false.
• Ensure that the DbUser (database user) and DbPassword (database password) parameters are defined correctly for all database domains in the wkplc_comp.properties file.
Use the link provided below to order the PTF for the 6.1.0.2 cumulative fix pack:
e. If you plan to configure Computer Associates eTrust SiteMinder as your external security manager to handle authorization and authentication, the XML configuration interface may not be able to access WebSphere Portal through eTrust SiteMinder. To enable the XML configuration interface to access WebSphere Portal, use eTrust SiteMinder to define the configuration URL (/wps/config) as unprotected. Refer to the eTrust SiteMinder documentation for specific instructions.
After the configuration URL is defined as unprotected, only WebSphere Portal enforces access control to this URL. Other resources, such as the /wps/myportal URL, are still protected by eTrust SiteMinder. If you have already set up eTrust SiteMinder for external authorization and you want to use the XML configuration interface (xmlaccess), make sure you have followed the procedure that allows xmlaccess execution.

Ensure that automatic synchronization is disabled on all nodes to be upgraded, and stop the node agents on all Portal nodes in the cell. When automatic synchronization is enabled, the node agent on each node automatically contacts the deployment manager at startup, and then at every synchronization interval, to attempt to synchronize the node's configuration repository with the master repository managed by the deployment manager. Because you must upgrade one node at a time to maintain 24x7 availability, turn off automatic synchronization to ensure that nodes that have not yet been upgraded do not inadvertently receive updated enterprise applications prematurely.
• In the administrative console for the deployment manager, select System Administration > Node agents in the navigation tree.
• Click nodeagent for the required node.
• Click File Synchronization Service.
• Uncheck the Automatic Synchronization check box on the File Synchronization Service page to disable the automatic synchronization feature, and then click OK.
• Repeat these steps for all other nodes to be upgraded.
• Click Save to save the configuration changes to the master repository.
• Select System Administration > Nodes in the navigation tree.
• Select all nodes that are not synchronized, and click Synchronize.
• Select System Administration > Node agents in the navigation tree.
• For the primary node, select the nodeagent and click Restart.
• Select the nodeagents of all secondary nodes and click Stop.
NOTE: Do not attempt to combine steps 3 and 4!
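The console steps above for disabling automatic synchronization can also be scripted against the deployment manager. The following Jython sketch is an illustration, not taken from this readme: it assumes the WebSphere Application Server admin object model, in which each node's file synchronization settings are a ConfigSynchronizationService configuration object with an autoSynchEnabled attribute. It runs only inside wsadmin (which supplies the AdminConfig object), not as standalone Python; verify the object and attribute names against your release.

```jython
# Run with: wsadmin.sh -lang jython -f disable_autosync.py
# AdminConfig is provided by the wsadmin environment.
# Caution: this touches EVERY node's synchronization service;
# restrict the list if only some nodes are being upgraded.
for svc in AdminConfig.list('ConfigSynchronizationService').splitlines():
    AdminConfig.modify(svc, [['autoSynchEnabled', 'false']])
AdminConfig.save()
```

To re-enable automatic synchronization after all nodes are upgraded, run the same loop with the attribute set back to 'true'.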
The update must be performed sequentially, not in parallel, on all of the server nodes in the cluster. Update the primary node first, then the secondary node, and then any subsequent nodes, one at a time, following the instructions below.

Perform the following steps to upgrade WebSphere Portal on the primary node:
a. Stop IP traffic to the node you are upgrading:
• If you are using Sysplex Distributor, issue VARY TCPIP,,SYSPLEX,QUIesce,JOBNAME= or VARY TCPIP,,SYSPLEX,QUIesce,POrt=. Reactivate later by replacing the QUIesce keyword with RESUME.
• If you are using IP sprayers for load balancing to the cluster members, reconfigure the IP sprayers to stop routing new requests to the Portal cluster member(s) on this node.
• If you are using the Web server plug-in for load balancing, perform the following steps to stop traffic to the node:
• In the Deployment Manager administrative console, click Servers > Clusters > cluster_name > Cluster members to obtain a view of the collection of cluster members.
• Locate the cluster member you are upgrading and change the value in the Configured weight column to zero. NOTE: Record the previous value so you can restore the setting when the upgrade is complete.
• Click Update to apply the change.
• If automatic plug-in generation and propagation is disabled, manually generate and/or propagate the plugin-cfg.xml file to the Web servers.
• Note that the Web server plug-in checks periodically for configuration updates, based on the value of the Refresh Configuration Interval property for the Web server plug-in (the default is 60 seconds). You can check this value in the Deployment Manager administrative console by selecting Servers > Web Servers > web_server_name > Plug-in Properties.
• If automatic propagation of the plug-in configuration file is enabled on the Web server(s), disable it from the Deployment Manager administrative console by going to Servers > Web Servers > web_server_name > Plug-in Properties and unchecking Automatically propagate plug-in configuration file.
b. If necessary, perform the following steps to upgrade WebSphere Application Server on the node:
• Log on to a telnet session with the WebSphere administrative user ID (for example: WSADMIN) and run the command ./stopNode.sh -user was_admin_userid -password was_admin_password from the /bin directory to stop the node agent.
• Upgrade WebSphere Application Server on the node, including the required interim fixes for WebSphere Portal.
• Run the ./startNode.sh command from the /bin directory to start the node agent.
c. Perform the following steps to run the installation program:
• Check the status of all active application servers and stop any that are active.
• If you have installed WebSphere Portal or WCM APAR interim fixes, use the Portal Update Installer to uninstall them before applying the 6.1.0.2 PTFs in SMP/E. For information on the Portal Update Installer for z/OS, see the following link:
• Apply the PTFs listed above for the 6.1.0.2 cumulative fix pack using SMP/E. Make sure your primary Portal system is mounted with the newly maintained 6.1.0.2 product HFS/zFS.
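The node-agent stop/start sequence in steps b and c can be collected into a small wrapper script. This is a sketch, not part of the readme: the profile /bin directory, was_admin_userid, and was_admin_password are placeholders, and by default (DRY_RUN=1) it only prints the commands instead of touching a real node agent.

```shell
#!/bin/sh
# Sketch of the node-agent stop/start sequence around maintenance.
# Placeholders: run from the profile /bin directory; substitute real
# credentials for was_admin_userid / was_admin_password.
DRY_RUN=${DRY_RUN:-1}

run() {
    # Print the command in dry-run mode; execute it otherwise.
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ./stopNode.sh -user was_admin_userid -password was_admin_password
# ... apply WebSphere Application Server maintenance and the
#     6.1.0.2 PTFs via SMP/E here ...
run ./startNode.sh
```

Set DRY_RUN=0 only after the placeholders are replaced with real values for your installation.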