
Exploring Job Market for Software Engineers

Published on 2024-11-08

Introduction

In this article, we dive into the process of extracting and analyzing job data from LinkedIn, leveraging a combination of Python, Nu shell, and ChatGPT to streamline and enhance our workflow.

I’ll walk you through the steps I took to carry out my research, showing how you can use these techniques to explore job markets in different countries or even in other fields. By combining these tools and methods, you can gather and analyze data to gain valuable insights into any job market you're interested in.

Technologies overview

Python

Python was chosen for its versatile libraries, particularly linkedin_jobs_scraper and openai. These packages streamlined the scraping and processing of job data.

Nu Shell

I experimented with Nu shell to compare its functionality against the traditional bash stack and to explore its potential benefits for handling and manipulating data.

ChatGPT

ChatGPT was employed to assist in the extraction of specific job features from the collected data, such as years of experience, degree requirements, tech stack, position levels, and core responsibilities.

Data extraction

To start, some data is required. LinkedIn was the first website that came to mind, and there was a ready-to-use Python package for it. I copied the example code, modified it a little, and ended up with a script that produces a JSON file with a list of job descriptions. Here is its source:

import json
import logging
import os
from threading import Lock

from dotenv import load_dotenv

# linkedin_jobs_scraper loads env statically
# So dotenv should be loaded before imports
load_dotenv()

from linkedin_jobs_scraper import LinkedinScraper
from linkedin_jobs_scraper.events import EventData, Events
from linkedin_jobs_scraper.filters import ExperienceLevelFilters, TypeFilters
from linkedin_jobs_scraper.query import Query, QueryFilters, QueryOptions

CHROMEDRIVER_PATH = os.environ["CHROMEDRIVER_PATH"]

RESULT_FILE_PATH = "result.json"
KEYWORDS = ("Python", "PHP", "Java", "Rust")
LOCATIONS = ("South Korea",)
TYPE_FILTERS = (TypeFilters.FULL_TIME,)
EXPERIENCE = (ExperienceLevelFilters.MID_SENIOR,)
LIMIT = 500

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)


def main():
    result_lock = Lock()
    result = []

    def on_data(data: EventData):
        with result_lock:
            result.append(data._asdict())

        log.info(
            "[JOB] %s at %s (description length: %d)",
            data.title,
            data.company,
            len(data.description),
        )

    def on_error(error):
        log.error("[ERROR] %s", error)

    def on_end():
        log.info("Scraping finished")

        if not result:
            return

        with open(RESULT_FILE_PATH, "w") as f:
            json.dump(result, f)

    queries = [
        Query(
            query=keyword,
            options=QueryOptions(
                limit=LIMIT,
                locations=[*LOCATIONS],
                filters=QueryFilters(
                    type=[*TYPE_FILTERS],
                    experience=[*EXPERIENCE],
                ),
            ),
        )
        for keyword in KEYWORDS
    ]

    scraper = LinkedinScraper(
        chrome_executable_path=CHROMEDRIVER_PATH,
        headless=True,
        max_workers=len(queries),
        slow_mo=0.5,
        page_load_timeout=40,
    )

    scraper.on(Events.DATA, on_data)
    scraper.on(Events.ERROR, on_error)
    scraper.on(Events.END, on_end)

    scraper.run(queries)


if __name__ == "__main__":
    main()

To download the Chrome driver, I made the following bash script:

#!/usr/bin/env bash
stable_version=$(curl 'https://googlechromelabs.github.io/chrome-for-testing/LATEST_RELEASE_STABLE')
driver_url=$(curl 'https://googlechromelabs.github.io/chrome-for-testing/known-good-versions-with-downloads.json' \
    | jq -r ".versions[] | select(.version == \"${stable_version}\") | .downloads.chromedriver[0] | select(.platform == \"linux64\") | .url")
wget "$driver_url"
driver_zip_name=$(echo "$driver_url" | awk -F'/' '{print $NF}')
unzip "$driver_zip_name"
rm "$driver_zip_name"

And my .env file looks like this:

CHROMEDRIVER_PATH="chromedriver-linux64/chromedriver"
LI_AT_COOKIE=
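
With the driver downloaded and the .env in place, the scraper can be launched as a regular Python script. A minimal sketch, assuming the code above is saved as scrape.py and the packages are installed under their usual PyPI names:

# Install dependencies (package names assumed) and run the scraper;
# result.json is written once scraping finishes
pip install python-dotenv linkedin-jobs-scraper
python scrape.py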

linkedin_jobs_scraper serializes jobs to the following DTO:

class EventData(NamedTuple):
    query: str = ''
    location: str = ''
    job_id: str = ''
    job_index: int = -1  # Only for debug
    link: str = ''
    apply_link: str = ''
    title: str = ''
    company: str = ''
    company_link: str = ''
    company_img_link: str = ''
    place: str = ''
    description: str = ''
    description_html: str = ''
    date: str = ''
    insights: List[str] = []
    skills: List[str] = []

Example record (description fields were replaced with ... for readability):

query: Python
location: South Korea
job_id: 3959499221
job_index: 0
link: https://www.linkedin.com/jobs/view/3959499221/?trk=flagship3_search_srp_jobs
apply_link:
title: Senior Python Software Engineer
company: Canonical
company_link:
company_img_link: https://media.licdn.com/dms/image/v2/C560BAQEbIYAkAURcYw/company-logo_100_100/company-logo_100_100/0/1650566107463/canonical_logo?e=1734566400&v=beta&t=emb8cxAFwBnOGwJ8nTftd8ODTFDkC_5SQNz-Jcd8zRU
place: Seoul, Seoul, South Korea (Remote)
description: ...
description_html: ...
date:
insights: [Remote Full-time Mid-Senior level, Skills: Python (Programming Language), Computer Science, 8 more, See how you compare to 18 applicants. Try Premium for RSD0, , Am I a good fit for this job?, How can I best position myself for this job?, Tell me more about Canonical]
skills: [Back-End Web Development, Computer Science, Engineering Documentation, Kubernetes, Linux, MLOps, OpenStack, Python (Programming Language), Technical Documentation, Web Services]

It was generated with the following Nu shell command:

# Replaces the description of a job with an ellipsis
def hide-description [] {
    update description { |row| '...' }
    | update description_html { |row| '...' }
}

cat result.json
| from json
| first
| hide-description
| to md --pretty

Last steps before analysis

We already have several ready-to-use features (title and skills), but I want more:

  • Years of experience
  • Degree
  • Tech stack
  • Position
  • Responsibilities

So let's add them with the help of ChatGPT!

import json
import logging
import os

from dotenv import load_dotenv
from linkedin_jobs_scraper.events import EventData
from openai import OpenAI
from tqdm import tqdm

load_dotenv()

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
)

with open("result.json", "rb") as f:
    jobs = json.load(f)

parsed_descriptions = []

for job in tqdm(jobs):
    job = EventData(**job)
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "user",
                "content": """
                    Process the given IT job description.
                    Output only raw JSON with the following fields:
                        - Experience (amount of years or null)
                        - Degree requirement (str if found else null)
                        - Tech stack (array of strings)
                        - Position (middle, senior, lead, manager, other (describe it))
                        - Core responsibilities (array of strings)

                    The output will be passed directly to Python's `json.loads`
                    function, so DO NOT APPLY MARKDOWN FORMATTING.
                    Example:
                    ```


                    {
                        "experience": 5, 
                        "degree": "bachelor", 
                        "stack": ["Python", "FastAPI", "Docker"], 
                        "position": "middle",
                        "responsibilities": ["Deliver features", "break production"]
                    }


                    ```

                    Here is a job description:
                """
                  "\n\n"
                  job.description_html,
            }
        ],
    )

    content = chat_completion.choices[0].message.content
    try:
        if not content:
            print("Empty result from ChatGPT")
            continue
        result = json.loads(content)
    except json.decoder.JSONDecodeError as e:
        logging.error("Failed to parse ChatGPT output as JSON: %s; completion: %s", e, chat_completion)
        continue

    result["job_id"] = job.job_id
    parsed_descriptions.append(result)

with open("job_descriptions_analysis.json", "w") as f:
    json.dump(parsed_descriptions, f)

Do not forget to add OPENAI_API_KEY to the .env file.
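
With that key added, the .env from earlier would look roughly like this (the actual key value is omitted):

CHROMEDRIVER_PATH="chromedriver-linux64/chromedriver"
LI_AT_COOKIE=
OPENAI_API_KEY=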

Now we can merge the results with the data from LinkedIn by job_id:

cat job_descriptions_analysis.json 
| from json 
| merge (cat result.json | from json)
| to json
| save full.json
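
Note that Nu shell's merge combines the two tables row by row, so this relies on both files listing the jobs in the same order. A quick sanity check is to compare the row counts, since they can drift apart when some descriptions were skipped during the ChatGPT step:

cat job_descriptions_analysis.json | from json | length
cat result.json | from json | length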

Our data is ready to analyze!

cat full.json | from json | columns
╭────┬──────────────────╮
│  0 │ experience       │
│  1 │ degree           │
│  2 │ stack            │
│  3 │ position         │
│  4 │ responsibilities │
│  5 │ job_id           │
│  6 │ query            │
│  7 │ location         │
│  8 │ job_index        │
│  9 │ link             │
│ 10 │ apply_link       │
│ 11 │ title            │
│ 12 │ company          │
│ 13 │ company_link     │
│ 14 │ company_img_link │
│ 15 │ place            │
│ 16 │ description      │
│ 17 │ description_html │
│ 18 │ date             │
│ 19 │ insights         │
│ 20 │ skills           │
╰────┴──────────────────╯
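
To peek at a single merged record without the long description fields, the hide-description helper from earlier can be reused (the selected columns are just an example):

cat full.json
| from json
| first
| hide-description
| select job_id title position experience degree stack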

Analysis

To start, load the dataset into a variable:

let df = cat full.json | from json
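
As a small aside, the same thing can be done in one step with open, since Nushell picks the parser from the file extension:

let df = open full.json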

Now we can look at how frequently each technology appears:

$df
| get 'stack' 
| flatten 
| uniq --count 
| sort-by count --reverse 
| first 20 
| to md --pretty
value count
Python 185
Java 70
AWS 65
Kubernetes 61
SQL 54
C 46
Docker 42
Linux 41
React 37
Kotlin 34
JavaScript 30
C 30
Kafka 28
TypeScript 26
GCP 25
Azure 24
Tableau 22
Hadoop 21
Spark 21
R 20

With Python:

$df
| filter-by-intersection 'stack' ['python']
| get 'stack' 
| flatten 
| where $it != 'Python' # Exclude python itself
| uniq --count 
| sort-by count --reverse 
| first 10
| to md --pretty
value count
Java 44
AWS 43
SQL 40
Kubernetes 36
Docker 27
C 26
Linux 24
R 20
GCP 20
C 18

Without Python:

$df
| filter-by-intersection 'stack' ['python'] --invert
| get 'stack' 
| flatten 
| uniq --count 
| sort-by count --reverse 
| first 10
| to md --pretty
value count
React 31
Java 26
Kubernetes 25
TypeScript 23
AWS 22
Kotlin 21
C 20
Linux 17
Docker 15
Next.js 15

Most of the jobs require Python, but there are also some front-end, Java, and C jobs.

The magic filter-by-intersection function is a custom one that filters rows whose list values contain a given set of elements:

# Filters rows by intersecting the given `column` with `requirements`
# Case insensitive; keeps a row only if ALL requirements exist in the `column` value
# With `--invert`, keeps only rows that contain none of the requirements
def filter-by-intersection [
    column: string
    requirements: list
    --invert (-i)
] {
    let required = $requirements | par-each { |el| $el | str downcase }
    let required_len = if $invert { 0 } else { $requirements | length }
    $in
    | filter { |row|
        $required_len == (
            $row
            | get $column
            | par-each { |el| $el | str downcase }
            | where ($it in $required)
            | length
        )
    }
}
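
As a quick usage example (with $df loaded as above), the following keeps only the jobs whose stack mentions both Python and Docker and counts them:

$df
| filter-by-intersection 'stack' ['python' 'docker']
| length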

What about the experience and degree requirements for each position in Python jobs?

$df
| filter-by-intersection 'stack' ['python'] 
| group-by 'position' --to-table
| insert 'group_size' { |group| $group.items | length } 
| where 'group_size' >= 10
| insert 'experience' { |group| 
    $group.items 
    | get 'experience'
    | uniq --count  
    | sort-by 'count' --reverse 
    | update 'value' { |row| if $row.value == null { 0 } else { $row.value }}
    | rename --column { 'value': 'years' }
    | first 3 
} 
| insert 'degree_requirement' { |group| 
    $group.items 
    | each { |row| $row.degree != null } 
    | uniq --count 
    | sort-by 'value'
    | rename --column { 'value': 'required' }
}
| sort-by 'group_size' --reverse 
| select 'group' 'group_size' 'experience' 'degree_requirement'

Output:

╭───┬────────┬────────────┬───────────────────────┬──────────────────────────╮
│ # │ group  │ group_size │      experience       │    degree_requirement    │
├───┼────────┼────────────┼───────────────────────┼──────────────────────────┤
│ 0 │ senior │         83 │ ╭───┬───────┬───────╮ │ ╭───┬──────────┬───────╮ │
│   │        │            │ │ # │ years │ count │ │ │ # │ required │ count │ │
│   │        │            │ ├───┼───────┼───────┤ │ ├───┼──────────┼───────┤ │
│   │        │            │ │ 0 │     5 │    30 │ │ │ 0 │ false    │    26 │ │
│   │        │            │ │ 1 │     0 │    11 │ │ │ 1 │ true     │    57 │ │
│   │        │            │ │ 2 │     7 │    11 │ │ ╰───┴──────────┴───────╯ │
│   │        │            │ ╰───┴───────┴───────╯ │                          │
│ 1 │ other  │         14 │ ╭───┬───────┬───────╮ │ ╭───┬──────────┬───────╮ │
│   │        │            │ │ # │ years │ count │ │ │ # │ required │ count │ │
│   │        │            │ ├───┼───────┼───────┤ │ ├───┼──────────┼───────┤ │
│   │        │            │ │ 0 │     0 │     8 │ │ │ 0 │ false    │    12 │ │
│   │        │            │ │ 1 │     5 │     1 │ │ │ 1 │ true     │     2 │ │
│   │        │            │ │ 2 │     3 │     1 │ │ ╰───┴──────────┴───────╯ │
│   │        │            │ ╰───┴───────┴───────╯ │                          │
│ 2 │ lead   │         12 │ ╭───┬───────┬───────╮ │ ╭───┬──────────┬───────╮ │
│   │        │            │ │ # │ years │ count │ │ │ # │ required │ count │ │
│   │        │            │ ├───┼───────┼───────┤ │ ├───┼──────────┼───────┤ │
│   │        │            │ │ 0 │     0 │     5 │ │ │ 0 │ false    │     6 │ │
│   │        │            │ │ 1 │    10 │     4 │ │ │ 1 │ true     │     6 │ │
│   │        │            │ │ 2 │     5 │     1 │ │ ╰───┴──────────┴───────╯ │
│   │        │            │ ╰───┴───────┴───────╯ │                          │
│ 3 │ middle │         10 │ ╭───┬───────┬───────╮ │ ╭───┬──────────┬───────╮ │
│   │        │            │ │ # │ years │ count │ │ │ # │ required │ count │ │
│   │        │            │ ├───┼───────┼───────┤ │ ├───┼──────────┼───────┤ │
│   │        │            │ │ 0 │     3 │     4 │ │ │ 0 │ false    │     4 │ │
│   │        │            │ │ 1 │     5 │     3 │ │ │ 1 │ true     │     6 │ │
│   │        │            │ │ 2 │     2 │     2 │ │ ╰───┴──────────┴───────╯ │
│   │        │            │ ╰───┴───────┴───────╯ │                          │
╰───┴────────┴────────────┴───────────────────────┴──────────────────────────╯

Extracting the most common requirements wasn't as easy as the previous steps: here I ran into a classification problem, and I'm going to describe my solution in the next chapter of this article.

Conclusion

We successfully extracted and analyzed job data from LinkedIn using the linkedin_jobs_scraper package. Responsibilities in the dataset are still too sparse and need better processing to form useful classes that would help with CV creation. But the steps above already help me a lot with monitoring and applying to jobs in a semi-automatic mode.

Originally published at https://dev.to/suzumenobu/exploring-job-market-for-software-engineers-3li8