Thursday, March 28, 2013

CCIE DC: Troubleshooting iSCSI

Hi Guys

After just finishing the excellent INE storage videos, I thought the coverage of iSCSI could use some expansion, so I have quickly put together this post to go through a few of the better iSCSI troubleshooting tools available to you.

Here is one of the first ones I discovered:

debug ips iscsi msgs


This awesome command helps you really drill down and see what the issue with iSCSI is when you're not able to find any targets in Windows:



At first I couldn't work out what was going on. My config is shown below:

feature iscsi
iscsi enable module 1
vsan database
  vsan 50 interface iscsi1/1
interface iscsi1/1
  no shutdown
!

Any idea what is wrong with the above? My zoning? Let's check that out:

SANB# show zone active vsan 50
zone name ISCSI vsan 50
* fcid 0x0f0002 [symbolic-nodename iqn.1991-05.com.microsoft:nervmainpc]
* fcid 0xcd0000 [pwwn 50:01:10:a0:00:18:31:80] [Target_Port1]
* fcid 0x0f0000 [pwwn 50:01:10:a0:00:18:31:82] [Target_port2]

zone name FC vsan 50
* fcid 0x0f0000 [pwwn 50:01:10:a0:00:18:31:82] [Target_port2]
* fcid 0x0f0001 [pwwn 50:01:10:a0:00:18:31:e6] [Server1_Port2]
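
For reference, the active zoning above would have been configured with something like the sketch below. This is reconstructed from the show zone active output; the zoneset name ZS1 is an assumption, since it isn't shown in the output:

```
zone name ISCSI vsan 50
  member symbolic-nodename iqn.1991-05.com.microsoft:nervmainpc
  member pwwn 50:01:10:a0:00:18:31:80
  member pwwn 50:01:10:a0:00:18:31:82
zoneset name ZS1 vsan 50
  member ISCSI
zoneset activate name ZS1 vsan 50
```

Note that the iSCSI initiator is zoned by its symbolic node name (its IQN) rather than by pWWN.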


The asterisk next to each FCID indicates that the zone member is logged in, so my zoning is correct. Let's check out show iscsi global:

SANB# show iscsi global
iSCSI/iSLB Global information (fabric-wide)
  Authentication: CHAP, NONE
  Initiator idle timeout: 300 seconds
  Dynamic Initiator: iSCSI
  iSLB Distribute: Disabled
  iSLB CFS Session: Does not exist
  Number of load balanced VRRP groups: 0
  Number of load-balanced initiators: 0
iSCSI/iSLB Global information (local to this switch)
  Import FC Target: Disabled
  Initiator Plogi timeout: 2 seconds
  Number of target node: 0
  Number of portals: 3
  Number of session: 0
  Failed session: 0, Last failed initiator name:



There is something in the above that shows what the problem is, but I still couldn't work out what was going on, so I enabled debug ips iscsi msgs:

Here is the output that I got that told me something was up:

SANB# 2013 Mar 28 18:58:51.567443 ips: Session Create (init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[] init-alias:[] ISID:[400001370000] indx:[0x02000000] init_ip:[10.0.0.83

The above showed me that the IQN was correct and I was connecting from the expected IP address.

2013 Mar 28 18:58:51.569116 ips: Session Create Response: Init: node_name iqn.1991-05.com.microsoft:nervmainpc init_name iqn.1991-05.com.microsoft:nervmainpc init-nwwn 21:01:00:0d:ec:2b:3c:42 init-pwwn 21:01:00:0d:ec:2b:3c:42 init-fcid 0 isid 400001370000pgt 12288 num_auth method 2 nodeIndex 2target_name  tgt-nw



2013 Mar 28 18:58:51.576393 ips: Session Destroy node-name:[iqn.1991-05.com.microsoft:nervmainpc] init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[]ISID:[400001370000] indx:[0x02000000] failure-code:[1] 


The above showed me that the target name (tgt-name) is empty and the session is immediately destroyed with failure-code 1: the initiator logs in, but there are no targets for it to see.

 
2013 Mar 28 18:58:51.576929 ips: Session destroy response status 0 for init_name:[iqn.1991-05.com.microsoft:nervmainpc] target_name:[] isid:[400001370000] sent

Suddenly it hit me:

SANB# show iscsi global
iSCSI/iSLB Global information (fabric-wide)
  Authentication: CHAP, NONE
  Initiator idle timeout: 300 seconds
  Dynamic Initiator: iSCSI
  iSLB Distribute: Disabled
  iSLB CFS Session: Does not exist
  Number of load balanced VRRP groups: 0
  Number of load-balanced initiators: 0
iSCSI/iSLB Global information (local to this switch)
  Import FC Target: Disabled
  Initiator Plogi timeout: 2 seconds
  Number of target node: 0
  Number of portals: 3
  Number of session: 0
  Failed session: 0, Last failed initiator name:



I wasn't actually importing any FC targets into iSCSI; I needed this command:


SANB(config)# iscsi import target fc
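
For reference, here is the full working configuration in one place, consolidating the config from earlier in this post plus the missing import command (a minimal sketch; your module, interface and VSAN numbers will obviously differ):

```
feature iscsi
iscsi enable module 1
iscsi import target fc
vsan database
  vsan 50 interface iscsi1/1
interface iscsi1/1
  no shutdown
```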


Now when I connected via the initiator I could see targets:



Here is the relevant debug output:

SANB# 2013 Mar 28 18:49:24.952423 ips: Session Create (init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[] init-alias:[] ISID:[400001370000] indx:[0x02000000] init_ip:[10.0.0.83
2013 Mar 28 18:49:24.954162 ips: Session Create Response: Init: node_name iqn.1991-05.com.microsoft:nervmainpc init_name iqn.1991-05.com.microsoft:nervmainpc init-nwwn 21:01:00:0d:ec:2b:3c:42 init-pwwn 21:01:00:0d:ec:2b:3c:42 init-fcid 0 isid 400001370000pgt 12288 num_auth method 2 nodeIndex 2target_name  tgt-nw
2013 Mar 28 18:49:24.963807 ips: Session Destroy node-name:[iqn.1991-05.com.microsoft:nervmainpc] init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[]ISID:[400001370000] indx:[0x02000000] failure-code:[1]
2013 Mar 28 18:49:24.964295 ips: Session destroy response status 0 for init_name:[iqn.1991-05.com.microsoft:nervmainpc] target_name:[] isid:[400001370000] sent
2013 Mar 28 18:49:28.690124 ips: Session Create (init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[iqn.1987-05.com.cisco:05.sanb.01-01.500110a000183180] init-alias:[] ISID:[400001370000] indx:[0x02000000] init_ip:[10.0.0.83] 
2013 Mar 28 18:49:28.691226 ips: Querying NS for target pwwn 50:01:10:a0:00:18:31:80 vsan 50 filter fcid 000f0002
2013 Mar 28 18:49:28.692556 ips: NS target response vsan:[50] fcid:[00cd0000] for target pwwn 50:01:10:a0:00:18:31:80
2013 Mar 28 18:49:28.693656 ips: Session Create Response: Init: node_name iqn.1991-05.com.microsoft:nervmainpc init_name iqn.1991-05.com.microsoft:nervmainpc init-nwwn 21:01:00:0d:ec:2b:3c:42 init-pwwn 21:01:00:0d:ec:2b:3c:42 init-fcid f0002 isid 400001370000pgt 12288 num_auth method 2 nodeIndex 2target_name iqn
2013 Mar 28 18:49:30.935782 ips: Session Create (init-name:[iqn.1991-05.com.microsoft:nervmainpc] tgt-name:[iqn.1987-05.com.cisco:05.sanb.01-01.500110a000183182] init-alias:[] ISID:[400001370000] indx:[0x02000000] init_ip:[10.0.0.83] 
2013 Mar 28 18:49:30.936780 ips: Querying NS for target pwwn 50:01:10:a0:00:18:31:82 vsan 50 filter fcid 000f0002
2013 Mar 28 18:49:30.938088 ips: NS target response vsan:[50] fcid:[000f0000] for target pwwn 50:01:10:a0:00:18:31:82
2013 Mar 28 18:49:30.939453 ips: Session Create Response: Init: node_name iqn.1991-05.com.microsoft:nervmainpc init_name iqn.1991-05.com.microsoft:nervmainpc init-nwwn 21:01:00:0d:ec:2b:3c:42 init-pwwn 21:01:00:0d:ec:2b:3c:42 init-fcid f0002 isid 400001370000pgt 12288 num_auth method 2 nodeIndex 2target_name iqn
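
Notice the tgt-name fields in the debug above: once targets are imported, the MDS auto-generates an iSCSI target IQN from each FC target's pWWN. The pattern, as inferred from this debug output (not an official Cisco API, just what the output suggests), appears to be iqn.1987-05.com.cisco:05.&lt;switch-name&gt;.&lt;slot&gt;-&lt;port&gt;.&lt;pwwn-without-colons&gt;. A quick sketch that reproduces the names seen above:

```python
def dynamic_target_iqn(switch_name: str, slot: int, port: int, pwwn: str) -> str:
    """Compose the auto-generated iSCSI target IQN for a dynamically
    imported FC target, using the format observed in the debug output."""
    # Strip the colons from the pWWN: 50:01:10:a0:... -> 500110a0...
    pwwn_hex = pwwn.replace(":", "").lower()
    # Slot and port are zero-padded to two digits (e.g. 01-01 for iscsi1/1)
    return f"iqn.1987-05.com.cisco:05.{switch_name}.{slot:02d}-{port:02d}.{pwwn_hex}"

# Matches the tgt-name for Target_Port1 in the debug above:
print(dynamic_target_iqn("sanb", 1, 1, "50:01:10:a0:00:18:31:80"))
# iqn.1987-05.com.cisco:05.sanb.01-01.500110a000183180
```

This is handy when you need to predict what target name the Windows initiator should discover for a given FC port.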




In conclusion, even the simplest mistakes are easily identified when you're debugging. You should be debugging the minute you're having problems in the CCIE DC lab; you don't have time to waste wondering what is wrong with your config!

I hope this helps someone out there

Monday, March 25, 2013

ISR G2 Licensing Trials

Hi Guys

I had trouble finding this on cisco.com myself, so I thought I would share how to enable trial licensing:

(config)#license boot module c2900 technology-package ?
  datak9      data technology
  securityk9  security technology
  uck9        unified communications technology

You will need to reboot the router after this, but once done the trial license will be available.
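
For example, to enable the security trial (a sketch; the hostname R1 is just a placeholder, and you will be prompted to accept the evaluation EULA):

```
R1(config)# license boot module c2900 technology-package securityk9
R1(config)# end
R1# write memory
R1# reload
```

After the reboot, show license will list the evaluation license and its remaining time.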


Sunday, March 24, 2013

CCIE DC: Quick tip for determining neighbors in a fibre channel network

Hi Guys

Super quick blog post, just because I noticed it while studying: a super easy way to get a list of your fibre channel neighboring switches' IP addresses is the following command:

SANA# show cfs peers

Physical Fabric
-------------------------------------------------------------------------
 Switch WWN              IP Address            
-------------------------------------------------------------------------
 20:00:00:0d:ec:2d:4f:40 10.0.0.32                               [Local]
                         SANA                                   
 20:00:00:0d:ec:2b:3c:40 10.0.0.62                             


This will show you all switches in the fabric, which can be quite useful! You can't discover neighbors with show cdp neighbors on an FC network, because obviously they are not connected via Ethernet; the command above will show you.

I hope this helps someone out there


Update: As per Brian's comments below, the show topology command is even better, because it also shows you which interface each peer is reachable through:


SANA#  show topology

FC Topology for VSAN 1 :
--------------------------------------------------------------------------------
       Interface  Peer Domain Peer Interface     Peer IP Address
--------------------------------------------------------------------------------
           fc1/14 0xe4(228)           fc1/14  10.0.0.62
   port-channel 1 0xe4(228)   port-channel 1  10.0.0.62

FC Topology for VSAN 10 :
--------------------------------------------------------------------------------
       Interface  Peer Domain Peer Interface     Peer IP Address
--------------------------------------------------------------------------------
           fc1/14 0x64(100)           fc1/14  10.0.0.62
   port-channel 1 0x64(100)   port-channel 1  10.0.0.62

FC Topology for VSAN 20 :
--------------------------------------------------------------------------------
       Interface  Peer Domain Peer Interface     Peer IP Address
--------------------------------------------------------------------------------
           fc1/14  0x3c(60)           fc1/14  10.0.0.62
   port-channel 1  0x3c(60)   port-channel 1  10.0.0.62


Saturday, March 9, 2013

Cisco Communications Manager 9.0 Joins the 21st Century with Native Call Queuing, Paging

Those who know me know I am the biggest Cisco fanboy going. If they farted, I'd probably explain how Cisco CleanAir technology helps ensure that wireless traffic is not affected by their farts.

But even I cannot fathom or accept the terribly long time it took them to introduce native paging and call queuing to CUCM, especially when you consider CME has had both of these features for quite a while now.


However! That day has finally come!

Let's chat about what you can accomplish with this. This post won't go through the configuration, but we will look at what you can do with these features.

Paging

Paging in CUCM 9.0 requires you to deploy a separate virtual machine running Singlewire InformaCast, a third-party paging application that has been available for CUCM separately for quite some time now. The fact that you have to deploy another virtual machine just for paging is somewhat annoying.

Singlewire can do all sorts of funky things, including sending messages with pictures, sending text with pictures to phones, live broadcasts, ad-hoc broadcasts, scheduled broadcasts and much more. However, if you want it to be "free" with CUCM, you can only use the basic version, which allows live broadcasting to up to 50 phones in a group. You can try a 60-day trial of the advanced version when you install Singlewire; to be honest, don't give your real email address: they seem to love spamming me after I asked to try the advanced version.


Native Queuing

The feature we have all been waiting for!



The screenshot above should give you an example of the kind of features we have available.

A music on hold source is selected for your call queue; if all the line group members in your hunt group are busy, the hold music is played. You can also specify a maximum number of calls and configure overflow behaviors such as:

  • Overflow no answer
  • Overflow max wait
  • Overflow no agents 

This is much better than what I was expecting they would give you. I suspected that the number of options available would be somewhat limited, but happily I was proven incorrect: all of the above is found under your standard Hunt Pilot.

You can also see from the top of the page that, for all hunt groups (not just this one), they have finally given you the option of using the line group members' forward settings! This often caused problems, so it is great to see the option finally available.

You can even specify that a user should be logged out of the hunt group if they fail to answer. (Be sure to assign the user the HLOG softkey if you're going to do this, though.)

You can now even set up new music on hold sources with periodic messages:




As you can see from the announcement settings at the bottom, you can specify separate MOH for the initial announcement, and have periodic announcements at predetermined intervals.


Conclusion
Both of these features are a long time coming. While paging is a little disappointing as a separate application and virtual machine, the queuing functionality makes up for it: 90 percent of the functionality that most smaller, helpdesk-based contact centers require is adequately covered.