Python script that uses the Splunk web login to authenticate, then searches Splunk logs (non-admin user, using requests)

I faced an issue where, trying to use the Splunk Python module (splunklib, splunk-sdk) for obtaining search results, I kept getting permission-related errors. Multiple online posts suggested that one needs to be a Splunk admin to use these, and I was not able to obtain a Splunk admin user account. So I decided to write a small Python script that logs in as a regular user through the Splunk web login page, performs a search, and then returns the search results. This was done using the requests module. It can be optimized in numerous ways, but it does the job in its current form too. Tested with Splunk version 7.0.0. The final goal is to include it in an automated test suite that checks the Splunk logs after running auto tests, then fails if errors are present.
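As a rough sketch of that final goal (the helper name and result shape are mine, matching the JSON the script below retrieves), a test could simply fail whenever the error search returns any events:

```python
def assert_no_splunk_errors(search_results_json):
    """Fail the test run if the Splunk error search returned any events.

    `search_results_json` is the parsed JSON from the results endpoint,
    i.e. a dict with a "results" list where each event has a "_raw" field.
    """
    results = search_results_json.get("results", [])
    # The failure message (with the first offending log line) is only
    # built if the assertion actually fails.
    assert not results, "Splunk returned %d error event(s); first: %s" % (
        len(results), results[0].get("_raw", ""))
```

A test runner such as nose or pytest would then show the first offending log line directly in the failure message.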

# The Beginning

# Some required imports
import requests, json, re, time
from bs4 import BeautifulSoup

# Base Splunk URL in your setup
Splunk_base_url = "https://your.splunk.url"

# Assemble the Splunk login page URL
login_page_url = Splunk_base_url + "/en-US/account/login"

# headers for various POSTs below
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    "Server": "Splunkd"
}

# Splunk username and password. Use your own.
username = "splunk_username"
password = "splunk_password"

# This request GETs the Splunk login page and starts the session.
s = requests.Session()
s.headers.update(headers)
login_page_get_request = s.get(login_page_url)

# We also need to parse this GET response html and obtain the cval parameter.
# cval is required for POST request later that will actually log us in.
result = re.search('"cval":(.*),"time":', login_page_get_request.text)
cval = result.group(1)

# Now let's log in.

# Data for log in POST
initial_login_data = {
    "cval": cval,
    "username": username,
    "password": password,
    "return_to": "/en-US/",
}

# Log in POST request is here
login_page_post_request = s.post(login_page_url, data=initial_login_data, headers=headers)

# Now that we are logged in, we need to get the FORM_KEY parameter which is required later. 
# FORM_KEY can be obtained by GETting JSON response from the url below.
get_config_url = Splunk_base_url + "/en-US/config"
get_config = s.get(get_config_url)

# Parse returned JSON and define FORM_KEY
get_config_json = json.loads(get_config.text)
FORM_KEY = get_config_json["FORM_KEY"]

# Define the URL for the search POST
post_search_url = Splunk_base_url + "/en-US/splunkd/__raw/services/search/jobs"

# Data required for the search POST
# Please note that this is where the actual search query goes. You can replace it with anything, including variables and similar.
post_search_body = {'search': 'search index=* AND (level NOT (WARN OR INFO OR DEBUG)) earliest=-30minutes'}

# headers required for the search POST
headers = {
    "X-Requested-With" : "XMLHttpRequest",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36",
    "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
    "X-Splunk-Form-Key": FORM_KEY
}

# Execute the search POST here
post_search = s.post(url=post_search_url,data=post_search_body,headers=headers)

# Get the search job sid from the search POST XML response
soup = BeautifulSoup(post_search.text, "lxml")
search_job_sid = soup.find("sid").text

# Now we need to wait for the search to complete. There are elegant ways to implement this; this is not one of them. I just wait 5 seconds, as that always works in my environment.
# Explicit wait (not optimal) while the results are obtained.
time.sleep(5)

# Now execute GET with that job sid to get the actual search results in JSON format
get_search_results_url = Splunk_base_url + "/en-US/splunkd/__raw/services/search/jobs/" + search_job_sid + "/results?output_mode=json&offset=0&count=20"
get_search_results = s.get(get_search_results_url)

# Load up the returned JSON from request above that contains the search results
get_search_results_json = json.loads(get_search_results.text)

# Some rudimentary formatting to parse search result JSON and show the search results one by one. Better to use your own.
issue_number = 1
if not get_search_results_json["results"]:
    print("No search results returned! Bye ...")
else:
    for splunk_result in get_search_results_json["results"]:
        print("============== issue start ================")
        print("issue #" + str(issue_number))
        print("")
        print(splunk_result["_raw"])
        print("")
        issue_number += 1
        print("============== issue end ================")
# The End
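Instead of the fixed 5-second sleep, the job's status endpoint can be polled until Splunk reports the search done. A sketch using the same logged-in requests.Session and splunkd endpoint as above; the entry[0]["content"]["isDone"] layout matches the JSON job status in my Splunk 7 setup, so treat it as an assumption for other versions:

```python
import json
import time

def wait_for_search_job(session, base_url, sid, timeout=60, poll_interval=1):
    """Poll the search job status until isDone is reported, or time out."""
    status_url = (base_url + "/en-US/splunkd/__raw/services/search/jobs/"
                  + sid + "?output_mode=json")
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = json.loads(session.get(status_url).text)
        # The job status JSON nests the flags under entry[0]["content"]
        if status["entry"][0]["content"]["isDone"]:
            return True
        time.sleep(poll_interval)
    raise RuntimeError("Search job %s did not finish within %ss" % (sid, timeout))
```

Calling wait_for_search_job(s, Splunk_base_url, search_job_sid) in place of time.sleep(5) returns as soon as the job completes.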

Blender Python script to set up animation in which random text appears on screen

This script generates an animation with a random number popping up on the screen periodically. In this specific case, frames go from 1 to 590 and numbers appear every 5 frames. At 25fps, this gives 590/25 = 23.6 seconds of animation.

You can see an example of this exact script in this video. The section produced by this script runs from the 6-second mark to approximately the 25-second mark.

import bpy
import random

# text will appear every five frames, from frame 1 to 590, set up loop for that
# I rendered at 25fps, which means this would give you 590/25=23.6 seconds of appearing numbers
for counter in range(1, 590, 5):

    # text object
    text001_ops_object = bpy.ops.object
    text001_ops_object.text_add()

    # set characteristics of text object
    text001_context_object = bpy.context.object
    # each 5th frame will show one of the text choices below selected randomly
    text_choice = random.choice(["1.99","2.99","3.99","4.99","5.99","6.99","7.99","8.99","9.99"])
    text001_context_object.data.body = text_choice
    # location of the text, this was based on my manually created scene
    text001_context_object.location = [random.randint(-5, 3), random.randint(-3, 2), random.randint(1, 3)]
    # you can adjust the size/scaling of the text here with scaling for each of the x,y,z axes
    text001_context_object.scale = 1,1,1
    # set text object name, easier to manipulate later
    text001_context_object.name = "name_9a"

    # hide each text object initially at frame zero, important to set both hide and hide render
    text001_context_object.hide = True
    text001_context_object.hide_render = text001_context_object.hide
    text001_context_object.keyframe_insert(data_path="hide", frame=0, index=-1)
    text001_context_object.keyframe_insert(data_path="hide_render", frame=0, index=-1)

    # now show the text object in appropriate frame (every fifth frame from the for loop)
    appearing_frame = counter
    text001_context_object.hide = False
    text001_context_object.hide_render = text001_context_object.hide
    text001_context_object.keyframe_insert(data_path="hide", frame=appearing_frame, index=-1)
    text001_context_object.keyframe_insert(data_path="hide_render", frame=appearing_frame, index=-1)

    # set random colors for the text that is being added
    text001_data_materials = bpy.data.materials.new('visuals')
    text001_data_materials.diffuse_color = (random.random(),random.random(),random.random())
    text001_context_object.data.materials.append(text001_data_materials)
    text001_context_object.active_material.keyframe_insert("diffuse_color", frame=appearing_frame)
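Since bpy is only available inside Blender, the frame/text schedule the loop above produces can be sanity-checked outside Blender with a plain-Python sketch (the helper name is mine):

```python
import random

# One of these prices appears on every fifth frame from 1 to 590,
# mirroring the random.choice list in the bpy loop above.
PRICES = ["1.99", "2.99", "3.99", "4.99", "5.99",
          "6.99", "7.99", "8.99", "9.99"]

def build_text_schedule(start=1, end=590, step=5, rng=random):
    """Return (frame, text) pairs mirroring the bpy loop above."""
    return [(frame, rng.choice(PRICES)) for frame in range(start, end, step)]
```

range(1, 590, 5) yields 118 frames, the last at 586, which at 25fps matches the roughly 23.6 seconds of animation mentioned above.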

Python Selenium chromedriver: StaleElementReferenceException: Message: stale element reference: element is not attached to the page document .. some notes

When switching to chromedriver and away from PhantomJS, these messages were everywhere. They are a bit misleading.

StaleElementReferenceException: Message: stale element reference: element is not attached to the page document

A lot of these can be fixed by waiting for the element to fully load before trying to access it. In the cases we ran into, almost all stale element references were due to JavaScript not having fully loaded before the element was accessed.

A simple time.sleep() would immediately fix it. We then searched for more elegant ways to wait for the element.

The error message mentions a stale element, which one might associate with something old and outdated, but the element on the page is actually too new: the page re-rendered it after our reference was obtained, so the reference points at a node that is no longer attached to the document.
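One of the more elegant alternatives to a fixed sleep is to retry the whole lookup when the stale error appears. A framework-agnostic sketch; the exception class below is a stand-in for selenium.common.exceptions.StaleElementReferenceException, which you would import in real code:

```python
import time

class StaleElementReferenceException(Exception):
    """Stand-in for selenium.common.exceptions.StaleElementReferenceException."""

def retry_on_stale(action, attempts=5, delay=0.5):
    """Re-run `action` until it stops raising a stale-element error.

    `action` should re-locate the element each time, e.g.
    lambda: driver.find_element_by_id("total").text
    """
    for attempt in range(attempts):
        try:
            return action()
        except StaleElementReferenceException:
            # Give the page's JavaScript time to settle, then re-locate.
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```

Selenium's own WebDriverWait with expected_conditions is the more idiomatic route when the wait can be expressed as a condition.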

Python Selenium Chrome driver: element is not clickable at point. Other element would receive the click (Solved)

This is a common issue with ChromeDriver. There are a bunch of discussion boards where requests are made to have this fixed, but for some reason the Chrome developers are stubborn about not fixing it. How about they ask their users what they want!

Anyways, this helps: it moves the element into view. A set of tests we had that worked fine with PhantomJS (too bad its development was discontinued) and Firefox started producing errors with Chrome, along the lines of: element is not clickable at point. Other element would receive the click.

Before the line that triggers that error, you can insert this (change the element, unless you really are clicking on Good Morning! text):

self.driver.execute_script("arguments[0].scrollIntoView();", self.driver.find_element_by_link_text('Good Morning!'))

That fixed it for us and did not break PhantomJS and/or Firefox compatibility.
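The line above can be wrapped into a small helper so every click scrolls first. A sketch; the helper name is mine, and it works with any driver object exposing execute_script:

```python
def scroll_into_view_and_click(driver, element):
    """Scroll the element into the viewport, then click it."""
    driver.execute_script("arguments[0].scrollIntoView();", element)
    element.click()
```

In a test class this would be called as scroll_into_view_and_click(self.driver, self.driver.find_element_by_link_text('Good Morning!')).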

HDD to SSD slow transfer using SATA to USB3 cable (somewhat solved)

Like just about everyone else, I am switching my hard drive to an SSD.
I purchased the Samsung 850 EVO 1TB 2.5-Inch SATA III Internal SSD (MZ-75E1T0B/AM). That all arrived well.
I also bought a Sabrent SATA to USB 3.0 cable (under $10).
The problems started when I was cloning the drives using Samsung's Data Migration software.
Note: my existing drive (internal SATA II) has been showing errors in various HD inspection tools, so that is one possible cause of the slowness. It was also becoming very slow during general operation, which was the reason to switch to an SSD.

  1. I was cloning approximately 670GB of data to a 931GB drive.
  2. On the initial attempt the transfer rate was 1MB/s. The estimate was 300+ hours to complete, so I aborted.
  3. I then plugged the cable into a USB 2.0 port. That produced speeds around 25MB/s, still very slow.
  4. Then I switched back to the USB 3.0 port. This time speeds got up to 59MB/s and the promised completion time was under 4 hours, which is acceptable. However, halfway through it slowed down to 1MB/s and I had to cancel.
  5. Following some online research, I disabled Microsoft real-time virus protection, disconnected the mounted network drives, and unplugged the Ethernet cable, completely disabling networking.
  6. That worked. Speeds were again near 60MB/s and the clone took under 3.5 hours. Success.

I do know that USB 3.0 should provide higher transfer rates, but at this point I was just happy to get this done. I did not have time to investigate further.

PyCharm 2017.3 causing issues when nosetests runs into errors (Python Selenium)

After upgrading to PyCharm 2017.3 I am not able to run my tests properly. Any time an error is encountered, the test case does not fail and move on; it just stops executing. Downgrading to PyCharm 2017.2.4 resolves the problem.

Here are the errors I get:

jb_nosetest_runner.py", line 17, in
nose.main(addplugins=[TeamcityReport()])

... etc etc etc .....

in formatError
ec, ev, tb = err
TypeError: 'NoneType' object is not iterable

Selenium Python configuration for Chrome in headless mode

Here is what worked for me related to Chrome in headless mode:

First import a few modules:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

Then set the driver:

chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.binary_location = "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe"
self.driver = webdriver.Chrome(executable_path="C:/webdriver/chromedriver.exe", chrome_options=chrome_options)

Please make sure to change the binary location to where Chrome is installed on your system. You will also need the latest chromedriver.exe executable (downloadable from https://sites.google.com/a/chromium.org/chromedriver/) and its location goes in executable_path. You will need roughly Selenium 3.8.0 or newer as well.

REST Client for Visual Studio Code is nice

Get it from the extension manager in VS Code: REST Client by Huachao Mao

GitHub fun is here:
https://github.com/Huachao/vscode-restclient

I wanted to include some examples, but then I saw the documentation for this extension in VS Code and it's very good. Just scroll through the Details tab for instructions on how to use it. It's quite simple; for instance, GET https://httpbin.org/uuid performs a simple GET.
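As a taste of the syntax, here is a small .http file (requests target the public httpbin.org test service; the ### lines are REST Client's request separators):

```http
### simple GET
GET https://httpbin.org/uuid

### POST with a JSON body
POST https://httpbin.org/post
Content-Type: application/json

{"hello": "world"}
```

Click "Send Request" above either request in VS Code to see the response in a side panel.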